Test Report: KVM_Linux_crio 20109

a80036b9799ef97ff87d49db0998430356d1f02a:2025-01-20:37996

Test fail (23/304)

TestAddons/parallel/Ingress (152.82s)
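Note on the failure below: the test curls http://127.0.0.1/ inside the minikube VM with the Host header nginx.example.com and expects the nginx ingress to answer, but the ssh'd curl exits with status 28 (curl's operation-timed-out code), so the step fails with exit status 1 after roughly 2m10s. A minimal sketch of re-running that probe outside the test harness follows; it is not the actual addons_test.go code — the binary path, profile name, and Host header are taken from the log, while the explicit --max-time bound is an illustrative assumption.

// Minimal sketch (assumed, not the test's own code) of the failing probe:
// run curl inside the minikube VM over ssh and report how long it took.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Binary path and profile name come from the log; --max-time is an assumption.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-823768", "ssh",
		"curl -s --max-time 120 -H 'Host: nginx.example.com' http://127.0.0.1/")
	start := time.Now()
	out, err := cmd.CombinedOutput()
	// A curl exit code of 28 (surfaced here as an *exec.ExitError) means the
	// request timed out before the ingress controller answered.
	fmt.Printf("elapsed=%s err=%v\noutput:\n%s", time.Since(start), err, out)
}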

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-823768 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-823768 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-823768 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [66c3042c-5ca2-4e67-bbd5-02c9c84af6ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [66c3042c-5ca2-4e67-bbd5-02c9c84af6ea] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004392724s
I0120 15:08:12.435477 2136749 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-823768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.062358054s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-823768 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.158
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-823768 -n addons-823768
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 logs -n 25: (1.468299057s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | -p download-only-647713                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| delete  | -p download-only-647713                                                                     | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| delete  | -p download-only-193100                                                                     | download-only-193100 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| delete  | -p download-only-647713                                                                     | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-318745 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | binary-mirror-318745                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45603                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-318745                                                                     | binary-mirror-318745 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| addons  | disable dashboard -p                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | addons-823768                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | addons-823768                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-823768 --wait=true                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | -p addons-823768                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823768 addons                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-823768 ssh cat                                                                       | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | /opt/local-path-provisioner/pvc-f17509c2-6d0e-4c09-9067-5f1359f0d7a1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823768 addons                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-823768 ip                                                                            | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823768 addons                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-823768 addons                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-823768 ssh curl -s                                                                   | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-823768 ip                                                                            | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:10 UTC | 20 Jan 25 15:10 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 15:04:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 15:04:57.624256 2137369 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:04:57.624398 2137369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:04:57.624409 2137369 out.go:358] Setting ErrFile to fd 2...
	I0120 15:04:57.624415 2137369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:04:57.624591 2137369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:04:57.625297 2137369 out.go:352] Setting JSON to false
	I0120 15:04:57.626292 2137369 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":24444,"bootTime":1737361054,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:04:57.626412 2137369 start.go:139] virtualization: kvm guest
	I0120 15:04:57.628458 2137369 out.go:177] * [addons-823768] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:04:57.630260 2137369 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 15:04:57.630256 2137369 notify.go:220] Checking for updates...
	I0120 15:04:57.631582 2137369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:04:57.633104 2137369 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:04:57.634244 2137369 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:04:57.635455 2137369 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 15:04:57.636773 2137369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 15:04:57.638391 2137369 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:04:57.672908 2137369 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 15:04:57.674463 2137369 start.go:297] selected driver: kvm2
	I0120 15:04:57.674489 2137369 start.go:901] validating driver "kvm2" against <nil>
	I0120 15:04:57.674515 2137369 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:04:57.675362 2137369 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 15:04:57.675488 2137369 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 15:04:57.691694 2137369 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 15:04:57.691745 2137369 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 15:04:57.691969 2137369 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 15:04:57.692005 2137369 cni.go:84] Creating CNI manager for ""
	I0120 15:04:57.692050 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:04:57.692059 2137369 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 15:04:57.692109 2137369 start.go:340] cluster config:
	{Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:04:57.692209 2137369 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 15:04:57.695407 2137369 out.go:177] * Starting "addons-823768" primary control-plane node in "addons-823768" cluster
	I0120 15:04:57.697150 2137369 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 15:04:57.697201 2137369 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 15:04:57.697211 2137369 cache.go:56] Caching tarball of preloaded images
	I0120 15:04:57.697294 2137369 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 15:04:57.697305 2137369 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 15:04:57.697657 2137369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json ...
	I0120 15:04:57.697681 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json: {Name:mk4b31787ffc80a58bfaed119855eddc3ee78983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:04:57.697836 2137369 start.go:360] acquireMachinesLock for addons-823768: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 15:04:57.697883 2137369 start.go:364] duration metric: took 33.177µs to acquireMachinesLock for "addons-823768"
	I0120 15:04:57.697901 2137369 start.go:93] Provisioning new machine with config: &{Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 15:04:57.697959 2137369 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 15:04:57.699982 2137369 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0120 15:04:57.700137 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:04:57.700187 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:04:57.715764 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0120 15:04:57.716302 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:04:57.717042 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:04:57.717071 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:04:57.717464 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:04:57.717672 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
	I0120 15:04:57.717839 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:04:57.718072 2137369 start.go:159] libmachine.API.Create for "addons-823768" (driver="kvm2")
	I0120 15:04:57.718100 2137369 client.go:168] LocalClient.Create starting
	I0120 15:04:57.718140 2137369 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 15:04:57.817798 2137369 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 15:04:57.956327 2137369 main.go:141] libmachine: Running pre-create checks...
	I0120 15:04:57.956353 2137369 main.go:141] libmachine: (addons-823768) Calling .PreCreateCheck
	I0120 15:04:57.956945 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
	I0120 15:04:57.957429 2137369 main.go:141] libmachine: Creating machine...
	I0120 15:04:57.957442 2137369 main.go:141] libmachine: (addons-823768) Calling .Create
	I0120 15:04:57.957600 2137369 main.go:141] libmachine: (addons-823768) creating KVM machine...
	I0120 15:04:57.957614 2137369 main.go:141] libmachine: (addons-823768) creating network...
	I0120 15:04:57.958969 2137369 main.go:141] libmachine: (addons-823768) DBG | found existing default KVM network
	I0120 15:04:57.959704 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:57.959552 2137391 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201200}
	I0120 15:04:57.959765 2137369 main.go:141] libmachine: (addons-823768) DBG | created network xml: 
	I0120 15:04:57.959786 2137369 main.go:141] libmachine: (addons-823768) DBG | <network>
	I0120 15:04:57.959800 2137369 main.go:141] libmachine: (addons-823768) DBG |   <name>mk-addons-823768</name>
	I0120 15:04:57.959807 2137369 main.go:141] libmachine: (addons-823768) DBG |   <dns enable='no'/>
	I0120 15:04:57.959814 2137369 main.go:141] libmachine: (addons-823768) DBG |   
	I0120 15:04:57.959822 2137369 main.go:141] libmachine: (addons-823768) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0120 15:04:57.959830 2137369 main.go:141] libmachine: (addons-823768) DBG |     <dhcp>
	I0120 15:04:57.959836 2137369 main.go:141] libmachine: (addons-823768) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0120 15:04:57.959845 2137369 main.go:141] libmachine: (addons-823768) DBG |     </dhcp>
	I0120 15:04:57.959850 2137369 main.go:141] libmachine: (addons-823768) DBG |   </ip>
	I0120 15:04:57.959857 2137369 main.go:141] libmachine: (addons-823768) DBG |   
	I0120 15:04:57.959868 2137369 main.go:141] libmachine: (addons-823768) DBG | </network>
	I0120 15:04:57.959880 2137369 main.go:141] libmachine: (addons-823768) DBG | 
	I0120 15:04:57.965405 2137369 main.go:141] libmachine: (addons-823768) DBG | trying to create private KVM network mk-addons-823768 192.168.39.0/24...
	I0120 15:04:58.037588 2137369 main.go:141] libmachine: (addons-823768) DBG | private KVM network mk-addons-823768 192.168.39.0/24 created
	I0120 15:04:58.037645 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.037543 2137391 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:04:58.037659 2137369 main.go:141] libmachine: (addons-823768) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 ...
	I0120 15:04:58.037694 2137369 main.go:141] libmachine: (addons-823768) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 15:04:58.037727 2137369 main.go:141] libmachine: (addons-823768) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 15:04:58.314475 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.314330 2137391 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa...
	I0120 15:04:58.360414 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.360209 2137391 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/addons-823768.rawdisk...
	I0120 15:04:58.360466 2137369 main.go:141] libmachine: (addons-823768) DBG | Writing magic tar header
	I0120 15:04:58.360505 2137369 main.go:141] libmachine: (addons-823768) DBG | Writing SSH key tar header
	I0120 15:04:58.360517 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.360380 2137391 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 ...
	I0120 15:04:58.360543 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768
	I0120 15:04:58.360562 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 (perms=drwx------)
	I0120 15:04:58.360574 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 15:04:58.360589 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:04:58.360598 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 15:04:58.360610 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 15:04:58.360622 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins
	I0120 15:04:58.360631 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home
	I0120 15:04:58.360640 2137369 main.go:141] libmachine: (addons-823768) DBG | skipping /home - not owner
	I0120 15:04:58.360718 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 15:04:58.360752 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 15:04:58.360768 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 15:04:58.360782 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 15:04:58.360799 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 15:04:58.360810 2137369 main.go:141] libmachine: (addons-823768) creating domain...
	I0120 15:04:58.362224 2137369 main.go:141] libmachine: (addons-823768) define libvirt domain using xml: 
	I0120 15:04:58.362248 2137369 main.go:141] libmachine: (addons-823768) <domain type='kvm'>
	I0120 15:04:58.362256 2137369 main.go:141] libmachine: (addons-823768)   <name>addons-823768</name>
	I0120 15:04:58.362262 2137369 main.go:141] libmachine: (addons-823768)   <memory unit='MiB'>4000</memory>
	I0120 15:04:58.362267 2137369 main.go:141] libmachine: (addons-823768)   <vcpu>2</vcpu>
	I0120 15:04:58.362272 2137369 main.go:141] libmachine: (addons-823768)   <features>
	I0120 15:04:58.362277 2137369 main.go:141] libmachine: (addons-823768)     <acpi/>
	I0120 15:04:58.362282 2137369 main.go:141] libmachine: (addons-823768)     <apic/>
	I0120 15:04:58.362289 2137369 main.go:141] libmachine: (addons-823768)     <pae/>
	I0120 15:04:58.362296 2137369 main.go:141] libmachine: (addons-823768)     
	I0120 15:04:58.362301 2137369 main.go:141] libmachine: (addons-823768)   </features>
	I0120 15:04:58.362309 2137369 main.go:141] libmachine: (addons-823768)   <cpu mode='host-passthrough'>
	I0120 15:04:58.362325 2137369 main.go:141] libmachine: (addons-823768)   
	I0120 15:04:58.362336 2137369 main.go:141] libmachine: (addons-823768)   </cpu>
	I0120 15:04:58.362342 2137369 main.go:141] libmachine: (addons-823768)   <os>
	I0120 15:04:58.362350 2137369 main.go:141] libmachine: (addons-823768)     <type>hvm</type>
	I0120 15:04:58.362356 2137369 main.go:141] libmachine: (addons-823768)     <boot dev='cdrom'/>
	I0120 15:04:58.362363 2137369 main.go:141] libmachine: (addons-823768)     <boot dev='hd'/>
	I0120 15:04:58.362369 2137369 main.go:141] libmachine: (addons-823768)     <bootmenu enable='no'/>
	I0120 15:04:58.362377 2137369 main.go:141] libmachine: (addons-823768)   </os>
	I0120 15:04:58.362382 2137369 main.go:141] libmachine: (addons-823768)   <devices>
	I0120 15:04:58.362388 2137369 main.go:141] libmachine: (addons-823768)     <disk type='file' device='cdrom'>
	I0120 15:04:58.362397 2137369 main.go:141] libmachine: (addons-823768)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/boot2docker.iso'/>
	I0120 15:04:58.362405 2137369 main.go:141] libmachine: (addons-823768)       <target dev='hdc' bus='scsi'/>
	I0120 15:04:58.362411 2137369 main.go:141] libmachine: (addons-823768)       <readonly/>
	I0120 15:04:58.362418 2137369 main.go:141] libmachine: (addons-823768)     </disk>
	I0120 15:04:58.362432 2137369 main.go:141] libmachine: (addons-823768)     <disk type='file' device='disk'>
	I0120 15:04:58.362442 2137369 main.go:141] libmachine: (addons-823768)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 15:04:58.362450 2137369 main.go:141] libmachine: (addons-823768)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/addons-823768.rawdisk'/>
	I0120 15:04:58.362458 2137369 main.go:141] libmachine: (addons-823768)       <target dev='hda' bus='virtio'/>
	I0120 15:04:58.362463 2137369 main.go:141] libmachine: (addons-823768)     </disk>
	I0120 15:04:58.362471 2137369 main.go:141] libmachine: (addons-823768)     <interface type='network'>
	I0120 15:04:58.362491 2137369 main.go:141] libmachine: (addons-823768)       <source network='mk-addons-823768'/>
	I0120 15:04:58.362504 2137369 main.go:141] libmachine: (addons-823768)       <model type='virtio'/>
	I0120 15:04:58.362509 2137369 main.go:141] libmachine: (addons-823768)     </interface>
	I0120 15:04:58.362514 2137369 main.go:141] libmachine: (addons-823768)     <interface type='network'>
	I0120 15:04:58.362530 2137369 main.go:141] libmachine: (addons-823768)       <source network='default'/>
	I0120 15:04:58.362537 2137369 main.go:141] libmachine: (addons-823768)       <model type='virtio'/>
	I0120 15:04:58.362542 2137369 main.go:141] libmachine: (addons-823768)     </interface>
	I0120 15:04:58.362547 2137369 main.go:141] libmachine: (addons-823768)     <serial type='pty'>
	I0120 15:04:58.362552 2137369 main.go:141] libmachine: (addons-823768)       <target port='0'/>
	I0120 15:04:58.362558 2137369 main.go:141] libmachine: (addons-823768)     </serial>
	I0120 15:04:58.362565 2137369 main.go:141] libmachine: (addons-823768)     <console type='pty'>
	I0120 15:04:58.362579 2137369 main.go:141] libmachine: (addons-823768)       <target type='serial' port='0'/>
	I0120 15:04:58.362587 2137369 main.go:141] libmachine: (addons-823768)     </console>
	I0120 15:04:58.362594 2137369 main.go:141] libmachine: (addons-823768)     <rng model='virtio'>
	I0120 15:04:58.362642 2137369 main.go:141] libmachine: (addons-823768)       <backend model='random'>/dev/random</backend>
	I0120 15:04:58.362668 2137369 main.go:141] libmachine: (addons-823768)     </rng>
	I0120 15:04:58.362682 2137369 main.go:141] libmachine: (addons-823768)     
	I0120 15:04:58.362694 2137369 main.go:141] libmachine: (addons-823768)     
	I0120 15:04:58.362704 2137369 main.go:141] libmachine: (addons-823768)   </devices>
	I0120 15:04:58.362716 2137369 main.go:141] libmachine: (addons-823768) </domain>
	I0120 15:04:58.362728 2137369 main.go:141] libmachine: (addons-823768) 
	I0120 15:04:58.367308 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:fe:73:ee in network default
	I0120 15:04:58.367817 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:04:58.367831 2137369 main.go:141] libmachine: (addons-823768) starting domain...
	I0120 15:04:58.367843 2137369 main.go:141] libmachine: (addons-823768) ensuring networks are active...
	I0120 15:04:58.368477 2137369 main.go:141] libmachine: (addons-823768) Ensuring network default is active
	I0120 15:04:58.368765 2137369 main.go:141] libmachine: (addons-823768) Ensuring network mk-addons-823768 is active
	I0120 15:04:58.369246 2137369 main.go:141] libmachine: (addons-823768) getting domain XML...
	I0120 15:04:58.369915 2137369 main.go:141] libmachine: (addons-823768) creating domain...
	I0120 15:04:59.601024 2137369 main.go:141] libmachine: (addons-823768) waiting for IP...
	I0120 15:04:59.602003 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:04:59.602406 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:04:59.602487 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:59.602427 2137391 retry.go:31] will retry after 258.668513ms: waiting for domain to come up
	I0120 15:04:59.863113 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:04:59.863860 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:04:59.863887 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:59.863820 2137391 retry.go:31] will retry after 284.943032ms: waiting for domain to come up
	I0120 15:05:00.150387 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:00.150799 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:00.150864 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:00.150788 2137391 retry.go:31] will retry after 487.888334ms: waiting for domain to come up
	I0120 15:05:00.640607 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:00.641049 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:00.641074 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:00.640997 2137391 retry.go:31] will retry after 506.402264ms: waiting for domain to come up
	I0120 15:05:01.148692 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:01.149072 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:01.149103 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:01.149042 2137391 retry.go:31] will retry after 610.710776ms: waiting for domain to come up
	I0120 15:05:01.761084 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:01.761615 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:01.761660 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:01.761555 2137391 retry.go:31] will retry after 869.953856ms: waiting for domain to come up
	I0120 15:05:02.632849 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:02.633348 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:02.633383 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:02.633307 2137391 retry.go:31] will retry after 878.477724ms: waiting for domain to come up
	I0120 15:05:03.512981 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:03.513483 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:03.513516 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:03.513425 2137391 retry.go:31] will retry after 1.196488457s: waiting for domain to come up
	I0120 15:05:04.711923 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:04.712468 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:04.712555 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:04.712444 2137391 retry.go:31] will retry after 1.238217465s: waiting for domain to come up
	I0120 15:05:05.952338 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:05.952718 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:05.952767 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:05.952682 2137391 retry.go:31] will retry after 1.963992606s: waiting for domain to come up
	I0120 15:05:07.919115 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:07.919614 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:07.919688 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:07.919591 2137391 retry.go:31] will retry after 2.598377206s: waiting for domain to come up
	I0120 15:05:10.519561 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:10.519995 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:10.520062 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:10.519979 2137391 retry.go:31] will retry after 2.387749397s: waiting for domain to come up
	I0120 15:05:12.909148 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:12.909462 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:12.909482 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:12.909426 2137391 retry.go:31] will retry after 3.566319877s: waiting for domain to come up
	I0120 15:05:16.480251 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:16.480589 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:16.480632 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:16.480539 2137391 retry.go:31] will retry after 5.139483327s: waiting for domain to come up
	I0120 15:05:21.624584 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.625210 2137369 main.go:141] libmachine: (addons-823768) found domain IP: 192.168.39.158
	I0120 15:05:21.625248 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has current primary IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.625255 2137369 main.go:141] libmachine: (addons-823768) reserving static IP address...
	I0120 15:05:21.625737 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find host DHCP lease matching {name: "addons-823768", mac: "52:54:00:25:8d:22", ip: "192.168.39.158"} in network mk-addons-823768
	I0120 15:05:21.704346 2137369 main.go:141] libmachine: (addons-823768) DBG | Getting to WaitForSSH function...
	I0120 15:05:21.704393 2137369 main.go:141] libmachine: (addons-823768) reserved static IP address 192.168.39.158 for domain addons-823768
	I0120 15:05:21.704447 2137369 main.go:141] libmachine: (addons-823768) waiting for SSH...
	I0120 15:05:21.707052 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.707627 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:21.707662 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.707819 2137369 main.go:141] libmachine: (addons-823768) DBG | Using SSH client type: external
	I0120 15:05:21.707849 2137369 main.go:141] libmachine: (addons-823768) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa (-rw-------)
	I0120 15:05:21.707888 2137369 main.go:141] libmachine: (addons-823768) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 15:05:21.707907 2137369 main.go:141] libmachine: (addons-823768) DBG | About to run SSH command:
	I0120 15:05:21.707924 2137369 main.go:141] libmachine: (addons-823768) DBG | exit 0
	I0120 15:05:21.831180 2137369 main.go:141] libmachine: (addons-823768) DBG | SSH cmd err, output: <nil>: 
	I0120 15:05:21.831428 2137369 main.go:141] libmachine: (addons-823768) KVM machine creation complete
	I0120 15:05:21.831824 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
	I0120 15:05:21.832433 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:21.832624 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:21.832787 2137369 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 15:05:21.832803 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:21.834150 2137369 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 15:05:21.834163 2137369 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 15:05:21.834169 2137369 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 15:05:21.834174 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:21.836638 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.836979 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:21.837011 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.837216 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:21.837461 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:21.837656 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:21.837855 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:21.838060 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:21.838317 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:21.838332 2137369 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 15:05:21.938133 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 15:05:21.938165 2137369 main.go:141] libmachine: Detecting the provisioner...
	I0120 15:05:21.938176 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:21.941079 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.941442 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:21.941472 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.941599 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:21.941824 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:21.942016 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:21.942197 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:21.942359 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:21.942538 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:21.942550 2137369 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 15:05:22.044310 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 15:05:22.044405 2137369 main.go:141] libmachine: found compatible host: buildroot
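Provisioner detection is driven entirely by the /etc/os-release contents echoed above. A small illustrative Go sketch (not the libmachine code) that extracts the ID field the same probe relies on:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osID returns the value of the ID= line from an os-release style file.
func osID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", sc.Err()
}

func main() {
	id, err := osID("/etc/os-release")
	fmt.Println(id, err) // prints "buildroot" on the minikube ISO used in this run
}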
	I0120 15:05:22.044421 2137369 main.go:141] libmachine: Provisioning with buildroot...
	I0120 15:05:22.044435 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
	I0120 15:05:22.044699 2137369 buildroot.go:166] provisioning hostname "addons-823768"
	I0120 15:05:22.044733 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
	I0120 15:05:22.044923 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.047943 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.048353 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.048374 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.048517 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.048723 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.048877 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.048970 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.049121 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:22.049312 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:22.049324 2137369 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-823768 && echo "addons-823768" | sudo tee /etc/hostname
	I0120 15:05:22.166123 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-823768
	
	I0120 15:05:22.166193 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.169246 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.169621 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.169659 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.169836 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.170038 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.170186 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.170305 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.170495 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:22.170736 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:22.170762 2137369 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-823768' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-823768/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-823768' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 15:05:22.280555 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
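The /etc/hosts block above is rendered per node name before being run over SSH. A hypothetical Go helper that produces the same script for any hostname:

package main

import "fmt"

// hostsUpdateCmd returns the idempotent shell snippet that maps 127.0.1.1 to the node name.
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("addons-823768"))
}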
	I0120 15:05:22.280595 2137369 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 15:05:22.280622 2137369 buildroot.go:174] setting up certificates
	I0120 15:05:22.280638 2137369 provision.go:84] configureAuth start
	I0120 15:05:22.280654 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
	I0120 15:05:22.281026 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
	I0120 15:05:22.283951 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.284335 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.284358 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.284533 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.286813 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.287192 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.287215 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.287344 2137369 provision.go:143] copyHostCerts
	I0120 15:05:22.287426 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 15:05:22.287580 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 15:05:22.287682 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 15:05:22.287769 2137369 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.addons-823768 san=[127.0.0.1 192.168.39.158 addons-823768 localhost minikube]
	I0120 15:05:22.401850 2137369 provision.go:177] copyRemoteCerts
	I0120 15:05:22.401946 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 15:05:22.401974 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.405186 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.405681 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.405710 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.405977 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.406213 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.406368 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.406524 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:22.489134 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 15:05:22.514579 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 15:05:22.539697 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 15:05:22.564884 2137369 provision.go:87] duration metric: took 284.22466ms to configureAuth
	I0120 15:05:22.564927 2137369 buildroot.go:189] setting minikube options for container-runtime
	I0120 15:05:22.565156 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:05:22.565249 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.568228 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.568661 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.568706 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.568801 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.569007 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.569179 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.569341 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.569501 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:22.569699 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:22.569716 2137369 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 15:05:22.802503 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 15:05:22.802531 2137369 main.go:141] libmachine: Checking connection to Docker...
	I0120 15:05:22.802540 2137369 main.go:141] libmachine: (addons-823768) Calling .GetURL
	I0120 15:05:22.803962 2137369 main.go:141] libmachine: (addons-823768) DBG | using libvirt version 6000000
	I0120 15:05:22.806234 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.806594 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.806655 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.806814 2137369 main.go:141] libmachine: Docker is up and running!
	I0120 15:05:22.806829 2137369 main.go:141] libmachine: Reticulating splines...
	I0120 15:05:22.806837 2137369 client.go:171] duration metric: took 25.088726295s to LocalClient.Create
	I0120 15:05:22.806864 2137369 start.go:167] duration metric: took 25.088792622s to libmachine.API.Create "addons-823768"
	I0120 15:05:22.806874 2137369 start.go:293] postStartSetup for "addons-823768" (driver="kvm2")
	I0120 15:05:22.806886 2137369 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 15:05:22.806906 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:22.807197 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 15:05:22.807222 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.809507 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.809856 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.809877 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.810074 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.810283 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.810491 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.810686 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:22.893410 2137369 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 15:05:22.897799 2137369 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 15:05:22.897835 2137369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 15:05:22.897908 2137369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 15:05:22.897935 2137369 start.go:296] duration metric: took 91.053195ms for postStartSetup
	I0120 15:05:22.897999 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
	I0120 15:05:22.898651 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
	I0120 15:05:22.902713 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.903149 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.903182 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.903416 2137369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json ...
	I0120 15:05:22.903615 2137369 start.go:128] duration metric: took 25.205644985s to createHost
	I0120 15:05:22.903638 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.905563 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.905853 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.905900 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.905949 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.906149 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.906296 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.906429 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.906664 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:22.906868 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:22.906880 2137369 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 15:05:23.008106 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737385522.980460970
	
	I0120 15:05:23.008135 2137369 fix.go:216] guest clock: 1737385522.980460970
	I0120 15:05:23.008143 2137369 fix.go:229] Guest: 2025-01-20 15:05:22.98046097 +0000 UTC Remote: 2025-01-20 15:05:22.903626964 +0000 UTC m=+25.320898969 (delta=76.834006ms)
	I0120 15:05:23.008215 2137369 fix.go:200] guest clock delta is within tolerance: 76.834006ms
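The fix.go lines above compare the guest clock (read with `date +%s.%N` over SSH) against the host's view of the same instant. A standalone Go sketch of that comparison using the two timestamps from this run; the 2-second tolerance is an assumption, not a value taken from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1737385522, 980460970)                        // parsed from "1737385522.980460970"
	remote := time.Date(2025, 1, 20, 15, 5, 22, 903626964, time.UTC) // host-side reference time
	delta := guest.Sub(remote)

	const tolerance = 2 * time.Second // assumed threshold
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // 76.834006ms here
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}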
	I0120 15:05:23.008230 2137369 start.go:83] releasing machines lock for "addons-823768", held for 25.310337319s
	I0120 15:05:23.008265 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:23.008613 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
	I0120 15:05:23.011490 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.011849 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:23.011878 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.012093 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:23.012681 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:23.012869 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:23.012984 2137369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 15:05:23.013034 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:23.013163 2137369 ssh_runner.go:195] Run: cat /version.json
	I0120 15:05:23.013186 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:23.015959 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.016170 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.016408 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:23.016434 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.016609 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:23.016700 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:23.016732 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.016845 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:23.016912 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:23.016984 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:23.017055 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:23.017119 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:23.017164 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:23.017332 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:23.091913 2137369 ssh_runner.go:195] Run: systemctl --version
	I0120 15:05:23.122269 2137369 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 15:05:23.875612 2137369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 15:05:23.882266 2137369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 15:05:23.882347 2137369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 15:05:23.900478 2137369 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 15:05:23.900506 2137369 start.go:495] detecting cgroup driver to use...
	I0120 15:05:23.900575 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 15:05:23.918752 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 15:05:23.934434 2137369 docker.go:217] disabling cri-docker service (if available) ...
	I0120 15:05:23.934503 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 15:05:23.948970 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 15:05:23.963860 2137369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 15:05:24.085254 2137369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 15:05:24.229859 2137369 docker.go:233] disabling docker service ...
	I0120 15:05:24.229956 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 15:05:24.245938 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 15:05:24.260809 2137369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 15:05:24.396969 2137369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 15:05:24.518925 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
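Both cri-dockerd and the docker service are taken out of the way before CRI-O is configured, with the same stop/stop/disable/mask sequence for each. A hedged Go sketch of that pattern (errors deliberately ignored, since a unit may simply not exist on the image):

package main

import "os/exec"

// disableService stops, disables and masks a systemd unit pair (socket + service).
func disableService(name string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", name + ".socket"},
		{"systemctl", "stop", "-f", name + ".service"},
		{"systemctl", "disable", name + ".socket"},
		{"systemctl", "mask", name + ".service"},
	} {
		_ = exec.Command("sudo", args...).Run()
	}
}

func main() {
	disableService("cri-docker")
	disableService("docker")
}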
	I0120 15:05:24.534100 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 15:05:24.553792 2137369 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 15:05:24.553860 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.565579 2137369 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 15:05:24.565658 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.577482 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.589471 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.601410 2137369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 15:05:24.613467 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.624780 2137369 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.643556 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.655973 2137369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 15:05:24.666889 2137369 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 15:05:24.666993 2137369 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 15:05:24.681872 2137369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 15:05:24.692833 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 15:05:24.816424 2137369 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 15:05:24.916890 2137369 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 15:05:24.917033 2137369 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 15:05:24.922124 2137369 start.go:563] Will wait 60s for crictl version
	I0120 15:05:24.922223 2137369 ssh_runner.go:195] Run: which crictl
	I0120 15:05:24.926492 2137369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 15:05:24.966056 2137369 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 15:05:24.966165 2137369 ssh_runner.go:195] Run: crio --version
	I0120 15:05:25.000470 2137369 ssh_runner.go:195] Run: crio --version
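The CRI-O tuning above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, sysctls) followed by a daemon-reload and a restart. A minimal Go sketch of the two headline edits, intended to run on the guest; it mirrors the sed expressions visible in the log rather than minikube's own code:

package main

import (
	"fmt"
	"os/exec"
)

// setCrioOption rewrites a `key = ...` line in the CRI-O drop-in config.
func setCrioOption(key, value string) error {
	expr := fmt.Sprintf(`s|^.*%s = .*$|%s = "%s"|`, key, key, value)
	return exec.Command("sudo", "sed", "-i", expr, "/etc/crio/crio.conf.d/02-crio.conf").Run()
}

func main() {
	_ = setCrioOption("pause_image", "registry.k8s.io/pause:3.10")
	_ = setCrioOption("cgroup_manager", "cgroupfs")
	// The log then reloads systemd and restarts CRI-O to pick the changes up.
	_ = exec.Command("sudo", "systemctl", "daemon-reload").Run()
	_ = exec.Command("sudo", "systemctl", "restart", "crio").Run()
}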
	I0120 15:05:25.032126 2137369 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 15:05:25.033657 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
	I0120 15:05:25.036578 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:25.037003 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:25.037039 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:25.037400 2137369 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 15:05:25.042011 2137369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 15:05:25.055574 2137369 kubeadm.go:883] updating cluster {Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 15:05:25.055706 2137369 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 15:05:25.055752 2137369 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 15:05:25.092416 2137369 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 15:05:25.092490 2137369 ssh_runner.go:195] Run: which lz4
	I0120 15:05:25.096985 2137369 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 15:05:25.101643 2137369 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 15:05:25.101687 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 15:05:26.559521 2137369 crio.go:462] duration metric: took 1.462632814s to copy over tarball
	I0120 15:05:26.559603 2137369 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 15:05:28.881265 2137369 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321627399s)
	I0120 15:05:28.881296 2137369 crio.go:469] duration metric: took 2.321738568s to extract the tarball
	I0120 15:05:28.881308 2137369 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 15:05:28.923957 2137369 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 15:05:28.966345 2137369 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 15:05:28.966375 2137369 cache_images.go:84] Images are preloaded, skipping loading
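The two `crictl images --output json` calls above bracket the preload: before the tarball is extracted the expected kube-apiserver tag is missing, afterwards every image is present and loading is skipped. A hypothetical Go check along the same lines:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any image tag starts with the wanted reference.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.0")
	fmt.Println(ok, err) // false on a fresh guest, true once the preload tarball has been unpacked
}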
	I0120 15:05:28.966384 2137369 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.32.0 crio true true} ...
	I0120 15:05:28.966505 2137369 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-823768 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
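The kubelet drop-in shown above is templated from the node's Kubernetes version, name and IP before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 313-byte scp a few lines below). An illustrative Go rendering of it, not the actual minikube template:

package main

import "fmt"

// kubeletDropIn renders the systemd drop-in for a single-node cluster.
func kubeletDropIn(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, nodeName, nodeIP)
}

func main() {
	fmt.Print(kubeletDropIn("v1.32.0", "addons-823768", "192.168.39.158"))
}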
	I0120 15:05:28.966576 2137369 ssh_runner.go:195] Run: crio config
	I0120 15:05:29.027026 2137369 cni.go:84] Creating CNI manager for ""
	I0120 15:05:29.027056 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:05:29.027070 2137369 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 15:05:29.027106 2137369 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-823768 NodeName:addons-823768 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 15:05:29.027278 2137369 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-823768"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.158"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 15:05:29.027360 2137369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 15:05:29.038001 2137369 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 15:05:29.038070 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 15:05:29.048357 2137369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0120 15:05:29.066394 2137369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 15:05:29.083817 2137369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0120 15:05:29.101973 2137369 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0120 15:05:29.106193 2137369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 15:05:29.119610 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 15:05:29.229096 2137369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 15:05:29.247908 2137369 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768 for IP: 192.168.39.158
	I0120 15:05:29.247938 2137369 certs.go:194] generating shared ca certs ...
	I0120 15:05:29.247962 2137369 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.248133 2137369 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 15:05:29.375528 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt ...
	I0120 15:05:29.375570 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt: {Name:mk95237ca492d6a8873dc0ee527d241251260641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.375788 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key ...
	I0120 15:05:29.375806 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key: {Name:mk2a2005e42e379cc392095c3323349ceaba77a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.375924 2137369 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 15:05:29.506135 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt ...
	I0120 15:05:29.506170 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt: {Name:mkbf86178b27c05eca2541aa5684eb4efb701b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.506350 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key ...
	I0120 15:05:29.506366 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key: {Name:mk482675847c9e92b5693c4a036fdcbdd07762af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.506469 2137369 certs.go:256] generating profile certs ...
	I0120 15:05:29.506569 2137369 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key
	I0120 15:05:29.506591 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt with IP's: []
	I0120 15:05:29.632374 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt ...
	I0120 15:05:29.632424 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: {Name:mk3520768cf7dae31823de6f71890b04241d6376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.632615 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key ...
	I0120 15:05:29.632631 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key: {Name:mk76119af4a5a356e887e3134370f7dc46e58fde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.632737 2137369 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5
	I0120 15:05:29.632764 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I0120 15:05:29.770493 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 ...
	I0120 15:05:29.770531 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5: {Name:mk9454cdba7b3006624e137f0bfa7b68d0d57860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.770726 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5 ...
	I0120 15:05:29.770744 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5: {Name:mkb77c90195774352d1df405073394964b639a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.770848 2137369 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt
	I0120 15:05:29.770966 2137369 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key
	I0120 15:05:29.771058 2137369 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key
	I0120 15:05:29.771088 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt with IP's: []
	I0120 15:05:29.886204 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt ...
	I0120 15:05:29.886243 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt: {Name:mka25cfa7c2ede2de31741302e198a7540947810 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.886431 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key ...
	I0120 15:05:29.886449 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key: {Name:mk8d45c04d3d1bcd97c6423c1861ad369ae8c86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.886681 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 15:05:29.886732 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 15:05:29.886764 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 15:05:29.886800 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 15:05:29.887529 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 15:05:29.920958 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 15:05:29.961136 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 15:05:29.991787 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 15:05:30.017520 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 15:05:30.042540 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 15:05:30.067826 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 15:05:30.093111 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 15:05:30.120801 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 15:05:30.145867 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
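The certs.go/crypto.go lines above create two local CAs (minikubeCA and proxyClientCA), sign per-profile client, apiserver and aggregator certificates with them, and then copy everything into /var/lib/minikube/certs on the guest. As an isolated illustration of only the first step, a hedged Go sketch that mints a throwaway self-signed CA; the key size and validity period are assumptions, not values from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048) // error handling omitted for brevity
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}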
	I0120 15:05:30.163291 2137369 ssh_runner.go:195] Run: openssl version
	I0120 15:05:30.169332 2137369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 15:05:30.180684 2137369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 15:05:30.185690 2137369 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 15:05:30.185771 2137369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 15:05:30.192059 2137369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
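The last two commands above compute the OpenSSL subject hash of minikubeCA.pem and symlink it into /etc/ssl/certs under that hash (b5213941.0 in this run), which is what makes the cluster CA trusted system-wide inside the guest. A hypothetical Go equivalent of that step:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links a CA certificate into /etc/ssl/certs using its openssl subject hash.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // drop a stale link so Symlink does not fail with EEXIST
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}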
	I0120 15:05:30.203678 2137369 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 15:05:30.208221 2137369 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 15:05:30.208309 2137369 kubeadm.go:392] StartCluster: {Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:05:30.208405 2137369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 15:05:30.208469 2137369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 15:05:30.247022 2137369 cri.go:89] found id: ""
	I0120 15:05:30.247118 2137369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 15:05:30.257748 2137369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 15:05:30.268149 2137369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 15:05:30.279855 2137369 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 15:05:30.279881 2137369 kubeadm.go:157] found existing configuration files:
	
	I0120 15:05:30.279930 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 15:05:30.290146 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 15:05:30.290227 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 15:05:30.300670 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 15:05:30.310440 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 15:05:30.310509 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 15:05:30.320924 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 15:05:30.330490 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 15:05:30.330568 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 15:05:30.340525 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 15:05:30.350412 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 15:05:30.350475 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
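The grep/rm sequence above is minikube's stale-kubeconfig check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not contain it (here grep exits with status 2 because none of the files exist yet) is removed so kubeadm regenerates it during init. A minimal sketch of that loop, assuming a hypothetical runRemote helper in place of minikube's ssh_runner; names and structure are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// runRemote is a stand-in for minikube's ssh_runner: it shells out locally so
// the sketch is runnable; the real code executes the command on the VM over SSH.
func runRemote(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func cleanupStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// A non-zero grep exit (endpoint absent, or file missing) is treated as
		// "possibly stale": remove the file and let kubeadm write a fresh one.
		if err := runRemote(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			_ = runRemote("sudo rm -f " + f)
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}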
	I0120 15:05:30.360454 2137369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 15:05:30.416929 2137369 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 15:05:30.417044 2137369 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 15:05:30.518614 2137369 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 15:05:30.518741 2137369 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 15:05:30.518916 2137369 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 15:05:30.540333 2137369 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 15:05:30.586140 2137369 out.go:235]   - Generating certificates and keys ...
	I0120 15:05:30.586320 2137369 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 15:05:30.586423 2137369 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 15:05:30.724586 2137369 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 15:05:30.825694 2137369 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 15:05:30.938774 2137369 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 15:05:31.384157 2137369 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 15:05:31.450833 2137369 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 15:05:31.451192 2137369 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-823768 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0120 15:05:31.753678 2137369 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 15:05:31.753966 2137369 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-823768 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0120 15:05:31.832258 2137369 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 15:05:32.352824 2137369 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 15:05:32.512677 2137369 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 15:05:32.512862 2137369 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 15:05:32.737640 2137369 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 15:05:32.934895 2137369 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 15:05:33.168194 2137369 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 15:05:33.369097 2137369 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 15:05:33.571513 2137369 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 15:05:33.572224 2137369 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 15:05:33.577165 2137369 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 15:05:33.579006 2137369 out.go:235]   - Booting up control plane ...
	I0120 15:05:33.579145 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 15:05:33.579230 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 15:05:33.579530 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 15:05:33.595480 2137369 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 15:05:33.603182 2137369 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 15:05:33.603401 2137369 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 15:05:33.728727 2137369 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 15:05:33.728864 2137369 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 15:05:34.245972 2137369 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.580806ms
	I0120 15:05:34.246087 2137369 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 15:05:39.244154 2137369 kubeadm.go:310] [api-check] The API server is healthy after 5.00149055s
	I0120 15:05:39.266303 2137369 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 15:05:39.287362 2137369 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 15:05:39.321758 2137369 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 15:05:39.321956 2137369 kubeadm.go:310] [mark-control-plane] Marking the node addons-823768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 15:05:39.340511 2137369 kubeadm.go:310] [bootstrap-token] Using token: ctmxn9.z3jofwz9r9zooxkk
	I0120 15:05:39.342300 2137369 out.go:235]   - Configuring RBAC rules ...
	I0120 15:05:39.342426 2137369 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 15:05:39.356885 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 15:05:39.374011 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 15:05:39.378696 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 15:05:39.383011 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 15:05:39.388592 2137369 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 15:05:39.650709 2137369 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 15:05:40.082837 2137369 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 15:05:40.650481 2137369 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 15:05:40.651438 2137369 kubeadm.go:310] 
	I0120 15:05:40.651502 2137369 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 15:05:40.651508 2137369 kubeadm.go:310] 
	I0120 15:05:40.651580 2137369 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 15:05:40.651588 2137369 kubeadm.go:310] 
	I0120 15:05:40.651645 2137369 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 15:05:40.651750 2137369 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 15:05:40.651833 2137369 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 15:05:40.651857 2137369 kubeadm.go:310] 
	I0120 15:05:40.651920 2137369 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 15:05:40.651928 2137369 kubeadm.go:310] 
	I0120 15:05:40.651964 2137369 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 15:05:40.651970 2137369 kubeadm.go:310] 
	I0120 15:05:40.652010 2137369 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 15:05:40.652095 2137369 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 15:05:40.652198 2137369 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 15:05:40.652208 2137369 kubeadm.go:310] 
	I0120 15:05:40.652305 2137369 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 15:05:40.652415 2137369 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 15:05:40.652426 2137369 kubeadm.go:310] 
	I0120 15:05:40.652542 2137369 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ctmxn9.z3jofwz9r9zooxkk \
	I0120 15:05:40.652709 2137369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 15:05:40.652749 2137369 kubeadm.go:310] 	--control-plane 
	I0120 15:05:40.652769 2137369 kubeadm.go:310] 
	I0120 15:05:40.652869 2137369 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 15:05:40.652877 2137369 kubeadm.go:310] 
	I0120 15:05:40.652965 2137369 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ctmxn9.z3jofwz9r9zooxkk \
	I0120 15:05:40.653092 2137369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 15:05:40.653919 2137369 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 15:05:40.653957 2137369 cni.go:84] Creating CNI manager for ""
	I0120 15:05:40.653968 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:05:40.655707 2137369 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 15:05:40.657014 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 15:05:40.669371 2137369 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
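The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced above; its contents are not captured in the log. The sketch below writes an illustrative conflist of the general bridge + host-local IPAM shape; every field value here is an assumption, not the file minikube actually installed:

package main

import "os"

// bridgeConflist is an illustrative bridge CNI config (assumed values only).
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Local equivalent of the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step
	// shown in the log (requires root on a real node).
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}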
	I0120 15:05:40.690666 2137369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 15:05:40.690750 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:40.690763 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-823768 minikube.k8s.io/updated_at=2025_01_20T15_05_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=addons-823768 minikube.k8s.io/primary=true
	I0120 15:05:40.817437 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:40.860939 2137369 ops.go:34] apiserver oom_adj: -16
	I0120 15:05:41.317594 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:41.818178 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:42.318320 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:42.818281 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:43.318223 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:43.818194 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:44.317755 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:44.817685 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:44.939850 2137369 kubeadm.go:1113] duration metric: took 4.249182583s to wait for elevateKubeSystemPrivileges
	I0120 15:05:44.939901 2137369 kubeadm.go:394] duration metric: took 14.731620646s to StartCluster
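The repeated `kubectl get sa default` calls above are a readiness poll: the run retries roughly every 500 ms until the default service account exists in the new cluster, then records the elapsed time as the elevateKubeSystemPrivileges metric. A minimal sketch of such a wait loop, using a local kubectl call as a stand-in check; this is not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// saExists is a stand-in: the real code runs kubectl on the VM against
// /var/lib/minikube/kubeconfig; here we just invoke kubectl locally.
func saExists() bool {
	return exec.Command("kubectl", "get", "sa", "default").Run() == nil
}

func waitForDefaultServiceAccount(timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		if saExists() {
			fmt.Printf("default service account ready after %s\n", time.Since(start))
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
	return fmt.Errorf("timed out after %s waiting for default service account", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
		panic(err)
	}
}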
	I0120 15:05:44.939931 2137369 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:44.940095 2137369 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:05:44.940664 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:44.940924 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 15:05:44.940960 2137369 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 15:05:44.941029 2137369 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0120 15:05:44.941156 2137369 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-823768"
	I0120 15:05:44.941168 2137369 addons.go:69] Setting default-storageclass=true in profile "addons-823768"
	I0120 15:05:44.941185 2137369 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-823768"
	I0120 15:05:44.941225 2137369 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-823768"
	I0120 15:05:44.941240 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:05:44.941236 2137369 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-823768"
	I0120 15:05:44.941261 2137369 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-823768"
	I0120 15:05:44.941263 2137369 addons.go:69] Setting ingress-dns=true in profile "addons-823768"
	I0120 15:05:44.941263 2137369 addons.go:69] Setting storage-provisioner=true in profile "addons-823768"
	I0120 15:05:44.941268 2137369 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-823768"
	I0120 15:05:44.941235 2137369 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-823768"
	I0120 15:05:44.941309 2137369 addons.go:69] Setting gcp-auth=true in profile "addons-823768"
	I0120 15:05:44.941312 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941245 2137369 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-823768"
	I0120 15:05:44.941326 2137369 addons.go:69] Setting volcano=true in profile "addons-823768"
	I0120 15:05:44.941335 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941341 2137369 addons.go:238] Setting addon volcano=true in "addons-823768"
	I0120 15:05:44.941340 2137369 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-823768"
	I0120 15:05:44.941351 2137369 mustload.go:65] Loading cluster: addons-823768
	I0120 15:05:44.941370 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941519 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:05:44.941724 2137369 addons.go:69] Setting volumesnapshots=true in profile "addons-823768"
	I0120 15:05:44.941738 2137369 addons.go:238] Setting addon volumesnapshots=true in "addons-823768"
	I0120 15:05:44.941738 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941760 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941762 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941764 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941775 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941254 2137369 addons.go:69] Setting registry=true in profile "addons-823768"
	I0120 15:05:44.941801 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941316 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941808 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941818 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941845 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941894 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941803 2137369 addons.go:238] Setting addon registry=true in "addons-823768"
	I0120 15:05:44.941238 2137369 addons.go:69] Setting cloud-spanner=true in profile "addons-823768"
	I0120 15:05:44.941921 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941927 2137369 addons.go:238] Setting addon cloud-spanner=true in "addons-823768"
	I0120 15:05:44.941803 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941300 2137369 addons.go:238] Setting addon ingress-dns=true in "addons-823768"
	I0120 15:05:44.941312 2137369 addons.go:238] Setting addon storage-provisioner=true in "addons-823768"
	I0120 15:05:44.941973 2137369 addons.go:69] Setting ingress=true in profile "addons-823768"
	I0120 15:05:44.941987 2137369 addons.go:69] Setting metrics-server=true in profile "addons-823768"
	I0120 15:05:44.942006 2137369 addons.go:238] Setting addon ingress=true in "addons-823768"
	I0120 15:05:44.942008 2137369 addons.go:238] Setting addon metrics-server=true in "addons-823768"
	I0120 15:05:44.941157 2137369 addons.go:69] Setting yakd=true in profile "addons-823768"
	I0120 15:05:44.942019 2137369 addons.go:69] Setting inspektor-gadget=true in profile "addons-823768"
	I0120 15:05:44.942024 2137369 addons.go:238] Setting addon yakd=true in "addons-823768"
	I0120 15:05:44.942029 2137369 addons.go:238] Setting addon inspektor-gadget=true in "addons-823768"
	I0120 15:05:44.941768 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942160 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.942193 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942221 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942248 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942361 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942512 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942664 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.942678 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.942701 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942711 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942769 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.942790 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942801 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942867 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.943045 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943083 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.943132 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.943150 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943180 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.943246 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.943315 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943346 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.943438 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943467 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.943517 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943543 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.944549 2137369 out.go:177] * Verifying Kubernetes components...
	I0120 15:05:44.946093 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 15:05:44.959677 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0120 15:05:44.960011 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42423
	I0120 15:05:44.961799 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I0120 15:05:44.962224 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
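The "Launching plugin server" / "Plugin server listening at address 127.0.0.1:PORT" lines show libmachine starting one kvm2 driver plugin process per operation and driving it over a local RPC connection (the .GetVersion, .SetConfigRaw, .GetMachineName, .GetState calls that follow). A sketch of such a client call with Go's net/rpc; the service and method names below are assumptions for illustration, not libmachine's exact wire API:

package main

import (
	"fmt"
	"net/rpc"
)

func main() {
	// Port taken from one of the "Plugin server listening" lines above;
	// in practice the port is ephemeral and reported by the plugin at launch.
	client, err := rpc.Dial("tcp", "127.0.0.1:41897")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Illustrative call mirroring the ".GetVersion" step in the log; the real
	// service/method names are not shown there and are assumed here.
	var version int
	if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
		panic(err)
	}
	fmt.Println("plugin API version:", version)
}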
	I0120 15:05:44.975456 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.975521 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.977858 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:44.977901 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:44.977974 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:44.978023 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:44.979595 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:44.979622 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:44.979676 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:44.979696 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:44.979754 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:44.979765 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:44.979780 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:44.979793 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:44.980051 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:44.980728 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.980771 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.981025 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:44.981103 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:44.981151 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:44.981247 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:44.981674 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.981709 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.990376 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.990438 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.992035 2137369 addons.go:238] Setting addon default-storageclass=true in "addons-823768"
	I0120 15:05:44.992096 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.992479 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.992535 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.012533 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45025
	I0120 15:05:45.013196 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.013883 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.013910 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.014339 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.015001 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.015058 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.015378 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0120 15:05:45.015747 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0120 15:05:45.015858 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.016102 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.016315 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.016336 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.016397 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0120 15:05:45.016643 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.016800 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.016927 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.016938 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.017161 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.017666 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.017682 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.018062 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.018681 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.018724 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.018957 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I0120 15:05:45.018991 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0120 15:05:45.019076 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0120 15:05:45.019360 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.019429 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.020018 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.020059 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.020141 2137369 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-823768"
	I0120 15:05:45.020185 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:45.020270 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.020464 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.020478 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.020539 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.020546 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.020581 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.020619 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0120 15:05:45.021207 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.021225 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.021347 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I0120 15:05:45.021477 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.021943 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.021966 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.022029 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.022276 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.022435 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.022448 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.022804 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.022874 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.023429 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.023466 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.023660 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.023716 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.024288 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.024332 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.024435 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.024590 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.024603 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.025568 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.026786 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0120 15:05:45.028110 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0120 15:05:45.030717 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0120 15:05:45.031353 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.031932 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.031958 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.032368 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.032579 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.032756 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0120 15:05:45.034385 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:45.034810 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.034864 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.035693 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0120 15:05:45.036864 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0120 15:05:45.038073 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0120 15:05:45.039402 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0120 15:05:45.040775 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0120 15:05:45.041455 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0120 15:05:45.041873 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0120 15:05:45.041899 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0120 15:05:45.041928 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.043759 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41487
	I0120 15:05:45.044358 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.045115 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.045137 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.045935 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.045941 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.046009 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.046409 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.046468 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.046483 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.046577 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0120 15:05:45.046798 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.047024 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.047144 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.047300 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.047464 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
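Each "new ssh client" line records the connection details (IP, port, key path, username) that the addon installers use to copy manifests such as rbac-external-attacher.yaml onto the node. A sketch of building an equivalent client with golang.org/x/crypto/ssh, reusing the values from the log; the helper name and structure are illustrative, not minikube's sshutil API:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient mirrors the fields logged by sshutil above (IP, port, key path, user).
func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}

func main() {
	client, err := newSSHClient("192.168.39.158", 22,
		"/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa",
		"docker")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected to", client.RemoteAddr())
}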
	I0120 15:05:45.047824 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0120 15:05:45.047935 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.047955 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.048202 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.048664 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.048738 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.048759 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.049129 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.049227 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.049283 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.049394 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.049412 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.050189 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.050309 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.050835 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.050876 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.052764 2137369 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0120 15:05:45.054269 2137369 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 15:05:45.054297 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0120 15:05:45.054320 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.055060 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0120 15:05:45.058077 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I0120 15:05:45.058077 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.058568 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.058679 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.059020 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.059104 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42395
	I0120 15:05:45.059253 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.059422 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.059552 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.063381 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.063387 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.063439 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.063467 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.063536 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0120 15:05:45.063633 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
	I0120 15:05:45.063727 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064081 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064189 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.064230 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.064315 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064334 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064318 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.064393 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.064400 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0120 15:05:45.064875 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064889 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.064909 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.064977 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.065050 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.065064 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.065201 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.065215 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.065388 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.065403 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.065458 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.065499 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.066244 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.066355 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.066407 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.066452 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.066496 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.067341 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.067385 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.067968 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.068002 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.068008 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.068018 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.068091 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.068266 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.068547 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.068806 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.068847 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.070172 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.070709 2137369 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0120 15:05:45.070818 2137369 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0120 15:05:45.072050 2137369 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0120 15:05:45.072074 2137369 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0120 15:05:45.072104 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.072104 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.072056 2137369 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0120 15:05:45.073050 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 15:05:45.073066 2137369 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 15:05:45.073096 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.073987 2137369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0120 15:05:45.074342 2137369 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 15:05:45.074359 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0120 15:05:45.074379 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.077368 2137369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 15:05:45.078567 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.079183 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.080151 2137369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 15:05:45.080251 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.080282 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.080851 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.081141 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.081161 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.081487 2137369 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 15:05:45.081508 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0120 15:05:45.081529 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.081534 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.081928 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.081993 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.082052 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.082065 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.082637 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.082689 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.082714 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.083344 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.083414 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.083588 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.084012 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.084190 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.084795 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.085120 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.085772 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.085809 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.085817 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.085975 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.086122 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.086288 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.086641 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0120 15:05:45.090566 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.091280 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.091307 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.091792 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.091991 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.093777 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.094693 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0120 15:05:45.095349 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.095731 2137369 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0120 15:05:45.095944 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.095969 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.096365 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.096584 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.096879 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0120 15:05:45.096896 2137369 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0120 15:05:45.096918 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.098679 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.098923 2137369 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 15:05:45.098946 2137369 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 15:05:45.098964 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.099552 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
	I0120 15:05:45.100021 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.100586 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.100612 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.100957 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.101159 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.101709 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0120 15:05:45.102248 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.102752 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.102794 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0120 15:05:45.102845 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.102863 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.102927 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.103224 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.103297 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.103391 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.103406 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.103603 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.103859 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.103886 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.103960 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.104015 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.104017 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.104033 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.104060 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.104106 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.104559 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.104625 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.104773 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.104831 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.104961 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.105289 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.105873 2137369 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0120 15:05:45.105892 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.106638 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.108265 2137369 out.go:177]   - Using image docker.io/busybox:stable
	I0120 15:05:45.108267 2137369 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0120 15:05:45.109934 2137369 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 15:05:45.109962 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0120 15:05:45.109986 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.109937 2137369 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 15:05:45.110047 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0120 15:05:45.110061 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.110071 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0120 15:05:45.110709 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.110716 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0120 15:05:45.111211 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.111237 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.111479 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.116371 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.116407 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.116419 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.116428 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.116378 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0120 15:05:45.116376 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.116503 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.116539 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.116542 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.116568 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.116736 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.116754 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.116795 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.117290 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.117300 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.117478 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.117505 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.117648 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.118043 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.118047 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.118553 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.118863 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.118882 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.119027 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.119310 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.119541 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.119616 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.120800 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0120 15:05:45.121412 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.122194 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0120 15:05:45.122217 2137369 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0120 15:05:45.122238 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.122255 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.122572 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:45.122640 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:45.122951 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:45.122971 2137369 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 15:05:45.123063 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:45.123075 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:45.123096 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:45.123119 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:45.123485 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:45.123498 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	W0120 15:05:45.123579 2137369 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0120 15:05:45.124438 2137369 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 15:05:45.124457 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 15:05:45.124477 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.126513 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.126830 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.126866 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.127075 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.127259 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.127582 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.127743 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.128516 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38469
	I0120 15:05:45.129042 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.129062 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.129786 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.129811 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.129822 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.129863 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.130021 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.130296 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.130295 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.130504 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.130640 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.130691 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.131987 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0120 15:05:45.132360 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.132609 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.133420 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.133445 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.133859 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.134118 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.134354 2137369 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0120 15:05:45.135691 2137369 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0120 15:05:45.135713 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0120 15:05:45.135737 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.135899 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.137430 2137369 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0120 15:05:45.138646 2137369 out.go:177]   - Using image docker.io/registry:2.8.3
	I0120 15:05:45.138920 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.139336 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.139350 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.139552 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.139760 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.139903 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0120 15:05:45.139927 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0120 15:05:45.139947 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.139913 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.140150 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	W0120 15:05:45.141472 2137369 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33340->192.168.39.158:22: read: connection reset by peer
	I0120 15:05:45.141659 2137369 retry.go:31] will retry after 248.832256ms: ssh: handshake failed: read tcp 192.168.39.1:33340->192.168.39.158:22: read: connection reset by peer
	I0120 15:05:45.143344 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.143825 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.143969 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.144003 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.144223 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.144424 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.144580 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.417776 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 15:05:45.471994 2137369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 15:05:45.472017 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 15:05:45.489674 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 15:05:45.526555 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 15:05:45.527835 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0120 15:05:45.527865 2137369 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0120 15:05:45.550223 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0120 15:05:45.550256 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0120 15:05:45.593768 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 15:05:45.603223 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 15:05:45.617896 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 15:05:45.640784 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0120 15:05:45.640819 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0120 15:05:45.663716 2137369 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0120 15:05:45.663743 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0120 15:05:45.677268 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0120 15:05:45.677311 2137369 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0120 15:05:45.703833 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 15:05:45.703861 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0120 15:05:45.714450 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 15:05:45.755610 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0120 15:05:45.755638 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0120 15:05:45.790857 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0120 15:05:45.790881 2137369 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0120 15:05:45.845887 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0120 15:05:45.887294 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0120 15:05:45.887977 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0120 15:05:45.888000 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0120 15:05:45.924864 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 15:05:45.924896 2137369 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 15:05:45.925761 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0120 15:05:45.925784 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0120 15:05:45.937497 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0120 15:05:45.937531 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0120 15:05:46.025842 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0120 15:05:46.025879 2137369 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0120 15:05:46.113217 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0120 15:05:46.142184 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 15:05:46.142236 2137369 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 15:05:46.196187 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0120 15:05:46.196215 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0120 15:05:46.211841 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0120 15:05:46.211883 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0120 15:05:46.260854 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0120 15:05:46.260889 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0120 15:05:46.349717 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 15:05:46.363946 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0120 15:05:46.363982 2137369 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0120 15:05:46.531940 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0120 15:05:46.531972 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0120 15:05:46.676731 2137369 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 15:05:46.676761 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0120 15:05:46.699767 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0120 15:05:46.911967 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0120 15:05:46.912002 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0120 15:05:47.094846 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 15:05:47.136150 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.718325287s)
	I0120 15:05:47.136232 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:47.136254 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:47.136602 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:47.136623 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:47.136638 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:47.136742 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:47.137159 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:47.137183 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:47.256792 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0120 15:05:47.256827 2137369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0120 15:05:47.600570 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0120 15:05:47.600599 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0120 15:05:48.025069 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0120 15:05:48.025100 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0120 15:05:48.180094 2137369 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.708049921s)
	I0120 15:05:48.180159 2137369 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.708105198s)
	I0120 15:05:48.180191 2137369 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0120 15:05:48.180283 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.690575165s)
	I0120 15:05:48.180339 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.180353 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.180355 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.653759394s)
	I0120 15:05:48.180401 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.180419 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.180669 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.180685 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.180696 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.180703 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.180826 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:48.180904 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.180930 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.181183 2137369 node_ready.go:35] waiting up to 6m0s for node "addons-823768" to be "Ready" ...
	I0120 15:05:48.181456 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.181473 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.181482 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.181494 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.182413 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.182432 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.182430 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:48.193850 2137369 node_ready.go:49] node "addons-823768" has status "Ready":"True"
	I0120 15:05:48.193881 2137369 node_ready.go:38] duration metric: took 12.636766ms for node "addons-823768" to be "Ready" ...
	I0120 15:05:48.193893 2137369 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 15:05:48.246992 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.247119 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.247468 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:48.247530 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.247542 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.259232 2137369 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace to be "Ready" ...
	I0120 15:05:48.283051 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 15:05:48.283087 2137369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0120 15:05:48.686812 2137369 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-823768" context rescaled to 1 replicas
	I0120 15:05:48.755354 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 15:05:50.506941 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:05:51.355199 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.761376681s)
	I0120 15:05:51.355294 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.355314 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.355667 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:51.355754 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.355778 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.355801 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.355813 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.356189 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.356205 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.457453 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.457488 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.457937 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:51.458005 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.458029 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.537693 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.934416648s)
	I0120 15:05:51.537784 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.537799 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.538261 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.538287 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.538298 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.538307 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.538535 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.538558 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.538576 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:51.939586 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0120 15:05:51.939639 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:51.943517 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:51.944138 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:51.944174 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:51.944392 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:51.944662 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:51.944863 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:51.945029 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:52.359222 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0120 15:05:52.485709 2137369 addons.go:238] Setting addon gcp-auth=true in "addons-823768"
	I0120 15:05:52.485795 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:52.486338 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:52.486410 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:52.503565 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
	I0120 15:05:52.504038 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:52.504670 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:52.504702 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:52.505075 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:52.505679 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:52.505728 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:52.521951 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0120 15:05:52.522548 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:52.523148 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:52.523181 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:52.523646 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:52.523933 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:52.526028 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:52.526329 2137369 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0120 15:05:52.526368 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:52.529896 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:52.530491 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:52.530534 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:52.530704 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:52.530923 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:52.531085 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:52.531247 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:52.803206 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:05:53.218889 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.600943277s)
	I0120 15:05:53.218965 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.218981 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.218975 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.504485364s)
	I0120 15:05:53.219043 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.373117907s)
	I0120 15:05:53.219086 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219102 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219059 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219150 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.331817088s)
	I0120 15:05:53.219162 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219185 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219205 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219246 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.105968154s)
	I0120 15:05:53.219283 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219298 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219406 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.869651037s)
	I0120 15:05:53.219441 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219452 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219549 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.519750539s)
	I0120 15:05:53.219567 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219576 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219695 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.124796876s)
	W0120 15:05:53.219739 2137369 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 15:05:53.219749 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219750 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219761 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219774 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219784 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219786 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219802 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219784 2137369 retry.go:31] will retry after 298.07171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 15:05:53.219827 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219835 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219830 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219847 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219851 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219856 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219857 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219861 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219868 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219885 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219885 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219896 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219905 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219868 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219912 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219916 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219931 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219940 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219909 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219947 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219953 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.220005 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.220026 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.220032 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.220039 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.220045 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.220117 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.220129 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.220211 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.220226 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.221944 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.222004 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222013 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.222022 2137369 addons.go:479] Verifying addon ingress=true in "addons-823768"
	I0120 15:05:53.222034 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.222059 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222065 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.222071 2137369 addons.go:479] Verifying addon registry=true in "addons-823768"
	I0120 15:05:53.222245 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.222266 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222270 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.222283 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222298 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.222513 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.222673 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222687 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.223993 2137369 out.go:177] * Verifying ingress addon...
	I0120 15:05:53.224095 2137369 out.go:177] * Verifying registry addon...
	I0120 15:05:53.224118 2137369 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-823768 service yakd-dashboard -n yakd-dashboard
	
	I0120 15:05:53.225572 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.225592 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.225602 2137369 addons.go:479] Verifying addon metrics-server=true in "addons-823768"
	I0120 15:05:53.225606 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.226182 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0120 15:05:53.226205 2137369 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0120 15:05:53.266543 2137369 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0120 15:05:53.266570 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:53.270246 2137369 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0120 15:05:53.270273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:53.518952 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 15:05:53.732321 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:53.733564 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:54.236310 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:54.238382 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:54.734271 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:54.734269 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:55.254274 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:55.254786 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:55.344689 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:05:55.500757 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.745335252s)
	I0120 15:05:55.500816 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:55.500835 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:55.500856 2137369 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.974495908s)
	I0120 15:05:55.501209 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:55.501237 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:55.501248 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:55.501260 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:55.501495 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:55.501519 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:55.501544 2137369 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-823768"
	I0120 15:05:55.502730 2137369 out.go:177] * Verifying csi-hostpath-driver addon...
	I0120 15:05:55.502734 2137369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 15:05:55.504940 2137369 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0120 15:05:55.505554 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0120 15:05:55.506259 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0120 15:05:55.506279 2137369 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0120 15:05:55.575714 2137369 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0120 15:05:55.575753 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:55.671030 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0120 15:05:55.671060 2137369 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0120 15:05:55.724863 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 15:05:55.724895 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0120 15:05:55.730623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:55.733038 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:55.782983 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 15:05:56.011925 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:56.234678 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:56.234948 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:56.512779 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:56.731710 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:56.731852 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:56.797148 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.278133853s)
	I0120 15:05:56.797216 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:56.797235 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:56.797528 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:56.797547 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:56.797556 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:56.797563 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:56.797791 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:56.797809 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:56.797822 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:57.010804 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:57.233033 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:57.233289 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:57.542025 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.758981808s)
	I0120 15:05:57.542087 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:57.542105 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:57.542525 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:57.542544 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:57.542553 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:57.542551 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:57.542560 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:57.542800 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:57.542813 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:57.542824 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:57.543638 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:57.544206 2137369 addons.go:479] Verifying addon gcp-auth=true in "addons-823768"
	I0120 15:05:57.546233 2137369 out.go:177] * Verifying gcp-auth addon...
	I0120 15:05:57.548167 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0120 15:05:57.608035 2137369 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0120 15:05:57.608063 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:57.757556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:57.758358 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:57.801308 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:05:58.017964 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:58.054047 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:58.232134 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:58.232349 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:58.511487 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:58.552181 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:58.732397 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:58.732613 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:59.009728 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:59.052751 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:59.232207 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:59.232938 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:59.511390 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:59.552558 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:59.732339 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:59.733137 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:00.011192 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:00.052027 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:00.230588 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:00.230983 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:00.265616 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:00.512889 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:00.553312 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:00.731731 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:00.732346 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:01.010060 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:01.052209 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:01.230828 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:01.231470 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:01.535707 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:01.552390 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:01.731636 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:01.732100 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:02.011594 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:02.052516 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:02.231481 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:02.231556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:02.512089 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:02.552231 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:02.732082 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:02.733140 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:02.765831 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:03.010323 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:03.051618 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:03.232259 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:03.232419 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:03.511102 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:03.552477 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:03.731997 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:03.732052 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:04.012261 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:04.052901 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:04.231169 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:04.231374 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:04.659683 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:04.660505 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:04.731959 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:04.732234 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:04.767143 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:05.010349 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:05.051978 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:05.231135 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:05.231273 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:05.512111 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:05.552863 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:05.732208 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:05.732591 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:06.011307 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:06.052476 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:06.232498 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:06.233241 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:06.510827 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:06.552061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:06.981709 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:06.986582 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:06.990553 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:07.011207 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:07.052592 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:07.231024 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:07.231680 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:07.511630 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:07.551889 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:07.731928 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:07.732524 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:08.011481 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:08.051943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:08.232305 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:08.232698 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:08.510640 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:08.552388 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:08.730939 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:08.733311 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:09.011242 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:09.052916 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:09.231309 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:09.232010 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:09.266856 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:09.513742 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:09.551779 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:09.730962 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:09.731230 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:10.010833 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:10.051663 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:10.231411 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:10.232988 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:10.511809 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:10.552186 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:10.732270 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:10.733127 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:11.347386 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:11.359078 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:11.446014 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:11.446575 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:11.446659 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:11.547362 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:11.554123 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:11.732018 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:11.732027 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:12.011787 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:12.051888 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:12.233138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:12.233471 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:12.511105 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:12.552912 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:12.731592 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:12.733254 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:12.765446 2137369 pod_ready.go:93] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.765473 2137369 pod_ready.go:82] duration metric: took 24.50620135s for pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.765484 2137369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.772118 2137369 pod_ready.go:93] pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.772143 2137369 pod_ready.go:82] duration metric: took 6.652598ms for pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.772152 2137369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.774084 2137369 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p59mv" not found
	I0120 15:06:12.774108 2137369 pod_ready.go:82] duration metric: took 1.950369ms for pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace to be "Ready" ...
	E0120 15:06:12.774119 2137369 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p59mv" not found
	I0120 15:06:12.774125 2137369 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.779574 2137369 pod_ready.go:93] pod "etcd-addons-823768" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.779594 2137369 pod_ready.go:82] duration metric: took 5.463343ms for pod "etcd-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.779604 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.786673 2137369 pod_ready.go:93] pod "kube-apiserver-addons-823768" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.786695 2137369 pod_ready.go:82] duration metric: took 7.084094ms for pod "kube-apiserver-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.786705 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.964107 2137369 pod_ready.go:93] pod "kube-controller-manager-addons-823768" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.964143 2137369 pod_ready.go:82] duration metric: took 177.429563ms for pod "kube-controller-manager-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.964159 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7rvmm" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:13.010809 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:13.052318 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:13.231805 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:13.232197 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:13.364952 2137369 pod_ready.go:93] pod "kube-proxy-7rvmm" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:13.364991 2137369 pod_ready.go:82] duration metric: took 400.822729ms for pod "kube-proxy-7rvmm" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:13.365008 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:13.510667 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:13.551664 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:13.732398 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:13.733063 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:13.763469 2137369 pod_ready.go:93] pod "kube-scheduler-addons-823768" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:13.763497 2137369 pod_ready.go:82] duration metric: took 398.480559ms for pod "kube-scheduler-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:13.763510 2137369 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:14.011840 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:14.052929 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:14.164972 2137369 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:14.165004 2137369 pod_ready.go:82] duration metric: took 401.486108ms for pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:14.165013 2137369 pod_ready.go:39] duration metric: took 25.971110211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 15:06:14.165032 2137369 api_server.go:52] waiting for apiserver process to appear ...
	I0120 15:06:14.165104 2137369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 15:06:14.200902 2137369 api_server.go:72] duration metric: took 29.259888219s to wait for apiserver process to appear ...
	I0120 15:06:14.200940 2137369 api_server.go:88] waiting for apiserver healthz status ...
	I0120 15:06:14.200966 2137369 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0120 15:06:14.206516 2137369 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0120 15:06:14.207753 2137369 api_server.go:141] control plane version: v1.32.0
	I0120 15:06:14.207791 2137369 api_server.go:131] duration metric: took 6.841209ms to wait for apiserver health ...
	I0120 15:06:14.207804 2137369 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 15:06:14.233265 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:14.234965 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:14.370097 2137369 system_pods.go:59] 18 kube-system pods found
	I0120 15:06:14.370150 2137369 system_pods.go:61] "amd-gpu-device-plugin-hd9wh" [74d848dc-f26d-43fe-8a5a-a0df1659422e] Running
	I0120 15:06:14.370159 2137369 system_pods.go:61] "coredns-668d6bf9bc-5vcsv" [07cf3526-d1a7-45e9-a4b0-843c4c5d8087] Running
	I0120 15:06:14.370170 2137369 system_pods.go:61] "csi-hostpath-attacher-0" [116b9f15-1304-49fb-9076-931a2afbb254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0120 15:06:14.370182 2137369 system_pods.go:61] "csi-hostpath-resizer-0" [ff9ae680-66e0-4d97-a31f-401bc2303326] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0120 15:06:14.370193 2137369 system_pods.go:61] "csi-hostpathplugin-gnx78" [c749cfac-9a22-4577-9180-7c6720645ff1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0120 15:06:14.370201 2137369 system_pods.go:61] "etcd-addons-823768" [08fad36c-a2d6-4155-b601-6f4e7384579b] Running
	I0120 15:06:14.370206 2137369 system_pods.go:61] "kube-apiserver-addons-823768" [59da341e-91d6-4346-9d34-8ef1d3cc6f8f] Running
	I0120 15:06:14.370212 2137369 system_pods.go:61] "kube-controller-manager-addons-823768" [d40a64ff-5eba-4184-ad41-8134c3107af4] Running
	I0120 15:06:14.370220 2137369 system_pods.go:61] "kube-ingress-dns-minikube" [c004e6ed-e3c7-41fb-81db-143b10c8e7be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0120 15:06:14.370228 2137369 system_pods.go:61] "kube-proxy-7rvmm" [ad2f5c6d-b93f-4390-876b-33132993d790] Running
	I0120 15:06:14.370235 2137369 system_pods.go:61] "kube-scheduler-addons-823768" [2baca71e-3466-46ff-88cc-4c21ff431e5e] Running
	I0120 15:06:14.370244 2137369 system_pods.go:61] "metrics-server-7fbb699795-9st7r" [6298e5c1-be6a-46ae-ab5f-36c0273b0dfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 15:06:14.370253 2137369 system_pods.go:61] "nvidia-device-plugin-daemonset-nbm5g" [cef6725a-67fd-465e-abee-d71f4159ef92] Running
	I0120 15:06:14.370263 2137369 system_pods.go:61] "registry-6c86875c6f-zjrvn" [0eff11df-e7ff-4331-8d40-9b86a497286d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0120 15:06:14.370271 2137369 system_pods.go:61] "registry-proxy-s6v6f" [fd22a4f5-094c-4b62-a18c-cb9b1478e55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0120 15:06:14.370303 2137369 system_pods.go:61] "snapshot-controller-68b874b76f-v9qfd" [9f5c996f-6eab-461e-ab1b-cd3349dd28b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 15:06:14.370312 2137369 system_pods.go:61] "snapshot-controller-68b874b76f-wz6d5" [cacd7ffe-a681-4acf-96f8-18ef261221a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 15:06:14.370317 2137369 system_pods.go:61] "storage-provisioner" [0e778f21-8d84-4dd3-a4d5-1d838a0c732a] Running
	I0120 15:06:14.370328 2137369 system_pods.go:74] duration metric: took 162.516641ms to wait for pod list to return data ...
	I0120 15:06:14.370343 2137369 default_sa.go:34] waiting for default service account to be created ...
	I0120 15:06:14.509778 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:14.552297 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:14.563348 2137369 default_sa.go:45] found service account: "default"
	I0120 15:06:14.563381 2137369 default_sa.go:55] duration metric: took 193.030729ms for default service account to be created ...
	I0120 15:06:14.563393 2137369 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 15:06:14.730162 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:14.730276 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:14.769176 2137369 system_pods.go:87] 18 kube-system pods found
	I0120 15:06:14.964028 2137369 system_pods.go:105] "amd-gpu-device-plugin-hd9wh" [74d848dc-f26d-43fe-8a5a-a0df1659422e] Running
	I0120 15:06:14.964091 2137369 system_pods.go:105] "coredns-668d6bf9bc-5vcsv" [07cf3526-d1a7-45e9-a4b0-843c4c5d8087] Running
	I0120 15:06:14.964101 2137369 system_pods.go:105] "csi-hostpath-attacher-0" [116b9f15-1304-49fb-9076-931a2afbb254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0120 15:06:14.964108 2137369 system_pods.go:105] "csi-hostpath-resizer-0" [ff9ae680-66e0-4d97-a31f-401bc2303326] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0120 15:06:14.964121 2137369 system_pods.go:105] "csi-hostpathplugin-gnx78" [c749cfac-9a22-4577-9180-7c6720645ff1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0120 15:06:14.964126 2137369 system_pods.go:105] "etcd-addons-823768" [08fad36c-a2d6-4155-b601-6f4e7384579b] Running
	I0120 15:06:14.964133 2137369 system_pods.go:105] "kube-apiserver-addons-823768" [59da341e-91d6-4346-9d34-8ef1d3cc6f8f] Running
	I0120 15:06:14.964141 2137369 system_pods.go:105] "kube-controller-manager-addons-823768" [d40a64ff-5eba-4184-ad41-8134c3107af4] Running
	I0120 15:06:14.964148 2137369 system_pods.go:105] "kube-ingress-dns-minikube" [c004e6ed-e3c7-41fb-81db-143b10c8e7be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0120 15:06:14.964153 2137369 system_pods.go:105] "kube-proxy-7rvmm" [ad2f5c6d-b93f-4390-876b-33132993d790] Running
	I0120 15:06:14.964160 2137369 system_pods.go:105] "kube-scheduler-addons-823768" [2baca71e-3466-46ff-88cc-4c21ff431e5e] Running
	I0120 15:06:14.964166 2137369 system_pods.go:105] "metrics-server-7fbb699795-9st7r" [6298e5c1-be6a-46ae-ab5f-36c0273b0dfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 15:06:14.964173 2137369 system_pods.go:105] "nvidia-device-plugin-daemonset-nbm5g" [cef6725a-67fd-465e-abee-d71f4159ef92] Running
	I0120 15:06:14.964180 2137369 system_pods.go:105] "registry-6c86875c6f-zjrvn" [0eff11df-e7ff-4331-8d40-9b86a497286d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0120 15:06:14.964186 2137369 system_pods.go:105] "registry-proxy-s6v6f" [fd22a4f5-094c-4b62-a18c-cb9b1478e55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0120 15:06:14.964197 2137369 system_pods.go:105] "snapshot-controller-68b874b76f-v9qfd" [9f5c996f-6eab-461e-ab1b-cd3349dd28b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 15:06:14.964205 2137369 system_pods.go:105] "snapshot-controller-68b874b76f-wz6d5" [cacd7ffe-a681-4acf-96f8-18ef261221a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 15:06:14.964210 2137369 system_pods.go:105] "storage-provisioner" [0e778f21-8d84-4dd3-a4d5-1d838a0c732a] Running
	I0120 15:06:14.964220 2137369 system_pods.go:147] duration metric: took 400.820113ms to wait for k8s-apps to be running ...
	I0120 15:06:14.964230 2137369 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 15:06:14.964284 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 15:06:15.004824 2137369 system_svc.go:56] duration metric: took 40.572241ms WaitForService to wait for kubelet
	I0120 15:06:15.004866 2137369 kubeadm.go:582] duration metric: took 30.063861442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 15:06:15.004901 2137369 node_conditions.go:102] verifying NodePressure condition ...
	I0120 15:06:15.009936 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:15.052242 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:15.164145 2137369 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 15:06:15.164177 2137369 node_conditions.go:123] node cpu capacity is 2
	I0120 15:06:15.164191 2137369 node_conditions.go:105] duration metric: took 159.284808ms to run NodePressure ...
	I0120 15:06:15.164204 2137369 start.go:241] waiting for startup goroutines ...
	I0120 15:06:15.230651 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:15.230956 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:15.510392 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:15.552212 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:15.732107 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:15.732654 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:16.010121 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:16.053180 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:16.232364 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:16.232798 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:16.511718 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:16.552275 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:16.731858 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:16.732386 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:17.010874 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:17.051623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:17.231000 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:17.232412 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:17.510781 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:17.552061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:17.733072 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:17.733322 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:18.010148 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:18.051291 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:18.232422 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:18.232743 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:18.512092 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:18.552325 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:18.731432 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:18.731830 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:19.279723 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:19.279804 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:19.280003 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:19.280489 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:19.510898 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:19.552506 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:19.730594 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:19.731187 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:20.010579 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:20.052401 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:20.230980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:20.231222 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:20.510335 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:20.551579 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:20.731061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:20.731252 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:21.010169 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:21.052654 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:21.229930 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:21.230449 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:21.510623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:21.552046 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:21.731181 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:21.731380 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:22.011222 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:22.052269 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:22.231033 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:22.232123 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:22.510785 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:22.610273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:22.731847 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:22.732017 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:23.010161 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:23.051461 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:23.232240 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:23.232266 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:23.511226 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:23.552179 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:23.732405 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:23.732643 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:24.010952 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:24.052795 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:24.231556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:24.231982 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:24.509972 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:24.551620 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:24.730311 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:24.730951 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:25.011086 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:25.051840 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:25.236485 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:25.237121 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:25.513580 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:25.551665 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:25.744969 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:25.745049 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:26.014786 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:26.054803 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:26.240066 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:26.240329 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:26.510110 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:26.552530 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:26.737921 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:26.743487 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:27.013212 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:27.055873 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:27.231178 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:27.233505 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:27.512769 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:27.551845 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:27.731474 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:27.731923 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:28.010313 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:28.052515 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:28.231843 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:28.232624 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:28.511885 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:28.552260 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:28.732295 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:28.732302 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:29.012191 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:29.052216 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:29.232422 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:29.232716 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:29.511140 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:29.662141 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:29.737287 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:29.737522 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:30.011355 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:30.051923 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:30.231542 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:30.232918 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:30.511333 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:30.552399 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:30.731397 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:30.731994 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:31.010820 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:31.052300 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:31.232512 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:31.232915 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:31.511124 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:31.552129 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:31.731943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:31.732929 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:32.010958 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:32.052413 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:32.232661 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:32.232713 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:32.512609 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:32.551853 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:32.731496 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:32.731943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:33.012493 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:33.051613 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:33.230151 2137369 kapi.go:107] duration metric: took 40.003969564s to wait for kubernetes.io/minikube-addons=registry ...
	I0120 15:06:33.231162 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:33.511111 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:33.552356 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:33.731499 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:34.013686 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:34.052825 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:34.231068 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:34.511033 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:34.552588 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:34.730166 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:35.031945 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:35.061605 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:35.234449 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:35.510502 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:35.559244 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:35.731057 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:36.010493 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:36.051808 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:36.232380 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:36.509698 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:36.552621 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:36.745681 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:37.010808 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:37.052525 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:37.230332 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:37.520327 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:37.551800 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:37.881777 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:38.009937 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:38.052520 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:38.230366 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:38.511231 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:38.551738 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:38.731132 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:39.010276 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:39.109985 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:39.231136 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:39.509972 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:39.552736 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:39.730944 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:40.011296 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:40.052529 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:40.231123 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:40.511777 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:40.551973 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:40.731032 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:41.010973 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:41.052526 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:41.231947 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:41.512073 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:41.552896 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:41.731178 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:42.010888 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:42.052437 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:42.231031 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:42.511849 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:42.552177 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:42.731349 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:43.010828 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:43.052516 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:43.230567 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:43.512691 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:43.552341 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:43.731593 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:44.249538 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:44.250135 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:44.250242 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:44.511995 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:44.553372 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:44.730853 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:45.011136 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:45.051417 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:45.230955 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:45.510980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:45.553271 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:45.730966 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:46.011246 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:46.051775 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:46.230803 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:46.510401 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:46.552603 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:46.731699 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:47.011501 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:47.052513 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:47.232159 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:47.511022 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:47.553343 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:47.732640 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:48.013715 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:48.053087 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:48.232696 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:48.511319 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:48.555075 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:48.732744 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:49.023367 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:49.057856 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:49.230358 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:49.512138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:49.552102 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:49.732022 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:50.011032 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:50.052346 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:50.232198 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:50.511402 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:50.551759 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:50.730597 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:51.010485 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:51.052574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:51.231216 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:51.510010 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:51.552151 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:51.731248 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:52.009643 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:52.057120 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:52.231103 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:52.514636 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:52.553051 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:52.732413 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:53.010980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:53.052832 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:53.642329 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:53.643036 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:53.650490 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:53.731286 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:54.013633 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:54.113275 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:54.231589 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:54.511224 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:54.552851 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:54.730371 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:55.010217 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:55.051884 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:55.231352 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:55.517291 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:55.616101 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:55.731154 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:56.010679 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:56.051666 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:56.231039 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:56.512038 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:56.554197 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:56.734633 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:57.011271 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:57.052479 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:57.229871 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:57.511654 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:57.551574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:57.730415 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:58.010561 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:58.052189 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:58.231332 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:58.511608 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:58.554449 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:58.738948 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:59.011428 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:59.051900 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:59.240098 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:59.530091 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:59.559468 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:59.734879 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:00.010375 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:00.051846 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:00.231614 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:00.511807 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:00.552559 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:00.731087 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:01.010312 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:01.052009 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:01.230769 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:01.510328 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:01.552144 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:01.732106 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:02.010884 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:02.052084 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:02.230927 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:02.512458 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:02.552729 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:02.731487 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:03.010739 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:03.052270 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:03.230875 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:03.511574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:03.553107 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:03.731603 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:04.193942 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:04.194389 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:04.231564 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:04.510775 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:04.551910 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:04.731010 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:05.010565 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:05.051766 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:05.231878 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:05.511402 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:05.552012 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:05.731266 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:06.010819 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:06.051758 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:06.230573 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:06.656273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:06.657626 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:06.833411 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:07.010555 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:07.054659 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:07.239812 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:07.510886 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:07.551625 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:07.730512 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:08.010801 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:08.052573 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:08.230496 2137369 kapi.go:107] duration metric: took 1m15.004285338s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0120 15:07:08.512826 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:08.552138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:09.011987 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:09.051989 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:09.510790 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:09.552296 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:10.011148 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:10.052537 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:10.511355 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:10.551839 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:11.011519 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:11.110503 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:11.511730 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:11.611811 2137369 kapi.go:107] duration metric: took 1m14.063637565s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0120 15:07:11.613849 2137369 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-823768 cluster.
	I0120 15:07:11.615491 2137369 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0120 15:07:11.616833 2137369 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0120 15:07:12.010475 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:12.511601 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:13.010504 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:13.512426 2137369 kapi.go:107] duration metric: took 1m18.006867517s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0120 15:07:13.514225 2137369 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, ingress-dns, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0120 15:07:13.515460 2137369 addons.go:514] duration metric: took 1m28.574436568s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner-rancher storage-provisioner inspektor-gadget ingress-dns cloud-spanner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0120 15:07:13.515500 2137369 start.go:246] waiting for cluster config update ...
	I0120 15:07:13.515518 2137369 start.go:255] writing updated cluster config ...
	I0120 15:07:13.515785 2137369 ssh_runner.go:195] Run: rm -f paused
	I0120 15:07:13.569861 2137369 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 15:07:13.571716 2137369 out.go:177] * Done! kubectl is now configured to use "addons-823768" cluster and "default" namespace by default
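	(Editor's note) The gcp-auth messages at 15:07:11 above explain how to keep the mounted GCP credentials out of a particular pod. A minimal sketch of such a pod manifest follows; the pod name, container name, image, and label value are placeholders, and the only detail taken from this report is the `gcp-auth-skip-secret` label key named in the log message:
	
	# hypothetical manifest, not part of this test run
	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds          # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"      # key from the gcp-auth message above; the value is an assumption
	spec:
	  containers:
	  - name: app                         # placeholder container name
	    image: nginx                      # placeholder image
	
	Per the message at 15:07:11, a pod created in the addons-823768 cluster with this label would not have the GCP credential secret mounted; existing pods would need to be recreated (or the addon re-enabled with --refresh) for the setting to take effect.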
	
	
	==> CRI-O <==
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.021384300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385824021352767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68471ec6-58c5-4ceb-a509-e61733e9b6a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.022208901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b515c3d3-8dae-4694-b066-fcc42cbc0df6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.022333801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b515c3d3-8dae-4694-b066-fcc42cbc0df6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.022809776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0564a081b1cc34a7b8cdc6412684e708f96dd1433148cf64bb8e4f1c1ecf5a0f,PodSandboxId:330b0828d8f12713c4adf1aa231d5655019f6893d855c80a706bcf4b0624c449,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737385627013062212,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-g5ctf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0eb150dd-16b
4-418a-9533-ef0140d258d1,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14
c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Met
adata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f
6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9e
dda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:44f6d3f5ab703369a329fc3d6bbb2ddaa949226698872beffb23c7017017b6c8,PodSandboxId:fdc33fccc554ab0db7d44daaa5c8a4259323a1ab26e0ae25a537254a1859e03f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610982473464,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xh2h7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7031b3e-6221-45ae-a4a8-b6ce4e152d6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2fa6de7c4e6267
1eed38d0a63f1dcacd3f7627ac46c4790794375099519f16,PodSandboxId:94a7aef39e7030754c1de18634ec36c8c03b19467b4b700a8cc9451af46ece1b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610147618531,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vqcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ead14696-5c7b-44d7-8555-1fb2df92f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0ab0f9b0c6189698167001b6d245418b6802b84e5afe9f7c0e74dcbda65715f,PodSandboxId:718d4440e9db0be4cfbc40ba14155123e2844de74b15896f6bb022efc435031a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737385582447810606,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c004e6ed-e3c7-41fb-81db-143b10c8e7be,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes
.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5
d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:ma
p[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec31438bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b515c3d3-8dae-4694-b066-fcc42cbc0df6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.070361299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f89b4d3e-c2b4-498f-bd23-1e72bb1af901 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.070473415Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f89b4d3e-c2b4-498f-bd23-1e72bb1af901 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.071884946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e49afd5-c3be-402c-8e8f-b899c4a78498 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.073524503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385824073493807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e49afd5-c3be-402c-8e8f-b899c4a78498 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.074113674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4a2455c-638b-44bc-b043-3c389b19cb91 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.074188775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4a2455c-638b-44bc-b043-3c389b19cb91 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.075007873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0564a081b1cc34a7b8cdc6412684e708f96dd1433148cf64bb8e4f1c1ecf5a0f,PodSandboxId:330b0828d8f12713c4adf1aa231d5655019f6893d855c80a706bcf4b0624c449,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737385627013062212,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-g5ctf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0eb150dd-16b
4-418a-9533-ef0140d258d1,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14
c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Met
adata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f
6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9e
dda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:44f6d3f5ab703369a329fc3d6bbb2ddaa949226698872beffb23c7017017b6c8,PodSandboxId:fdc33fccc554ab0db7d44daaa5c8a4259323a1ab26e0ae25a537254a1859e03f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610982473464,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xh2h7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7031b3e-6221-45ae-a4a8-b6ce4e152d6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2fa6de7c4e6267
1eed38d0a63f1dcacd3f7627ac46c4790794375099519f16,PodSandboxId:94a7aef39e7030754c1de18634ec36c8c03b19467b4b700a8cc9451af46ece1b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610147618531,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vqcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ead14696-5c7b-44d7-8555-1fb2df92f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0ab0f9b0c6189698167001b6d245418b6802b84e5afe9f7c0e74dcbda65715f,PodSandboxId:718d4440e9db0be4cfbc40ba14155123e2844de74b15896f6bb022efc435031a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737385582447810606,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c004e6ed-e3c7-41fb-81db-143b10c8e7be,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes
.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5
d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:ma
p[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec31438bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4a2455c-638b-44bc-b043-3c389b19cb91 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.115280940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e34c247e-8cec-4041-a527-61aa10dbc7b2 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.115375109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e34c247e-8cec-4041-a527-61aa10dbc7b2 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.116559888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c054c405-873e-42b2-82d5-89abb40f8421 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.117945629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385824117912949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c054c405-873e-42b2-82d5-89abb40f8421 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.118900994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31501b0e-a60f-46f0-82a0-8d738c20a088 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.118964950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31501b0e-a60f-46f0-82a0-8d738c20a088 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.119547331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0564a081b1cc34a7b8cdc6412684e708f96dd1433148cf64bb8e4f1c1ecf5a0f,PodSandboxId:330b0828d8f12713c4adf1aa231d5655019f6893d855c80a706bcf4b0624c449,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737385627013062212,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-g5ctf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0eb150dd-16b
4-418a-9533-ef0140d258d1,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14
c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Met
adata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f
6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9e
dda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:44f6d3f5ab703369a329fc3d6bbb2ddaa949226698872beffb23c7017017b6c8,PodSandboxId:fdc33fccc554ab0db7d44daaa5c8a4259323a1ab26e0ae25a537254a1859e03f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610982473464,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xh2h7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7031b3e-6221-45ae-a4a8-b6ce4e152d6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2fa6de7c4e6267
1eed38d0a63f1dcacd3f7627ac46c4790794375099519f16,PodSandboxId:94a7aef39e7030754c1de18634ec36c8c03b19467b4b700a8cc9451af46ece1b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610147618531,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vqcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ead14696-5c7b-44d7-8555-1fb2df92f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0ab0f9b0c6189698167001b6d245418b6802b84e5afe9f7c0e74dcbda65715f,PodSandboxId:718d4440e9db0be4cfbc40ba14155123e2844de74b15896f6bb022efc435031a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737385582447810606,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c004e6ed-e3c7-41fb-81db-143b10c8e7be,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes
.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5
d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:ma
p[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec31438bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31501b0e-a60f-46f0-82a0-8d738c20a088 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.157489158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8bda5514-7eb1-4980-96e1-79aced01d84e name=/runtime.v1.RuntimeService/Version
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.157587785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bda5514-7eb1-4980-96e1-79aced01d84e name=/runtime.v1.RuntimeService/Version
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.159073364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04316000-d561-4c0f-94e1-feaad129e982 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.160534626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385824160503840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04316000-d561-4c0f-94e1-feaad129e982 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.161133560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9a4792b-3edc-47b5-8658-7375aaf6428f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.161191632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9a4792b-3edc-47b5-8658-7375aaf6428f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.161739766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0564a081b1cc34a7b8cdc6412684e708f96dd1433148cf64bb8e4f1c1ecf5a0f,PodSandboxId:330b0828d8f12713c4adf1aa231d5655019f6893d855c80a706bcf4b0624c449,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737385627013062212,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-g5ctf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0eb150dd-16b
4-418a-9533-ef0140d258d1,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14
c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Met
adata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f
6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9e
dda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:44f6d3f5ab703369a329fc3d6bbb2ddaa949226698872beffb23c7017017b6c8,PodSandboxId:fdc33fccc554ab0db7d44daaa5c8a4259323a1ab26e0ae25a537254a1859e03f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610982473464,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xh2h7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7031b3e-6221-45ae-a4a8-b6ce4e152d6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2fa6de7c4e6267
1eed38d0a63f1dcacd3f7627ac46c4790794375099519f16,PodSandboxId:94a7aef39e7030754c1de18634ec36c8c03b19467b4b700a8cc9451af46ece1b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610147618531,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vqcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ead14696-5c7b-44d7-8555-1fb2df92f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0ab0f9b0c6189698167001b6d245418b6802b84e5afe9f7c0e74dcbda65715f,PodSandboxId:718d4440e9db0be4cfbc40ba14155123e2844de74b15896f6bb022efc435031a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737385582447810606,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c004e6ed-e3c7-41fb-81db-143b10c8e7be,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes
.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5
d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:ma
p[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec31438bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9a4792b-3edc-47b5-8658-7375aaf6428f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0d939b4caf08e       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                                              2 minutes ago       Running             nginx                                    0                   3a519efe9b038       nginx
	56341dccb27e2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   0cb85977d13d8       busybox
	739724295d0f2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   178d147355f56       csi-hostpathplugin-gnx78
	b4f42aa541558       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   178d147355f56       csi-hostpathplugin-gnx78
	0564a081b1cc3       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b                             3 minutes ago       Running             controller                               0                   330b0828d8f12       ingress-nginx-controller-56d7c84fd4-g5ctf
	a2e97ce48722b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   178d147355f56       csi-hostpathplugin-gnx78
	559997a706e3d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   178d147355f56       csi-hostpathplugin-gnx78
	4c46266f6f3f8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   178d147355f56       csi-hostpathplugin-gnx78
	ebd723b834f32       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   36fc01dd96c91       csi-hostpath-resizer-0
	407fa55d66c41       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   178d147355f56       csi-hostpathplugin-gnx78
	4ac2dabca6c91       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   5d17831bf379a       csi-hostpath-attacher-0
	44f6d3f5ab703       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                                             3 minutes ago       Exited              patch                                    1                   fdc33fccc554a       ingress-nginx-admission-patch-xh2h7
	2b2fa6de7c4e6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   3 minutes ago       Exited              create                                   0                   94a7aef39e703       ingress-nginx-admission-create-6vqcs
	c5d3228e30e2f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   f8f344c34f1e0       snapshot-controller-68b874b76f-wz6d5
	36030becf6c98       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   aef77a63ec4ad       snapshot-controller-68b874b76f-v9qfd
	f0ab0f9b0c618       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             4 minutes ago       Running             minikube-ingress-dns                     0                   718d4440e9db0       kube-ingress-dns-minikube
	97181205fe8bf       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     4 minutes ago       Running             amd-gpu-device-plugin                    0                   ee23f086b04ce       amd-gpu-device-plugin-hd9wh
	6eeabacb6e6ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   fca109f861e41       storage-provisioner
	3ad760d35635f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             4 minutes ago       Running             coredns                                  0                   136274a1e784b       coredns-668d6bf9bc-5vcsv
	a16679188eadc       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                                             4 minutes ago       Running             kube-proxy                               0                   3eb31a3186fcb       kube-proxy-7rvmm
	3e011bb870926       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                                             4 minutes ago       Running             etcd                                     0                   11457cc606696       etcd-addons-823768
	2e3f3a7d8000f       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                                                             4 minutes ago       Running             kube-apiserver                           0                   04698fdd92bec       kube-apiserver-addons-823768
	2e3453aa93d27       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                                             4 minutes ago       Running             kube-scheduler                           0                   e3f2609c351e3       kube-scheduler-addons-823768
	910f65c08fb23       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                                             4 minutes ago       Running             kube-controller-manager                  0                   717d4b555e17c       kube-controller-manager-addons-823768
	
	
	==> coredns [3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71] <==
	[INFO] 10.244.0.8:56019 - 8659 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000096646s
	[INFO] 10.244.0.8:56019 - 15449 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000112548s
	[INFO] 10.244.0.8:56019 - 48990 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000079748s
	[INFO] 10.244.0.8:56019 - 33141 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000105011s
	[INFO] 10.244.0.8:56019 - 62395 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000134282s
	[INFO] 10.244.0.8:56019 - 6006 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000106164s
	[INFO] 10.244.0.8:56019 - 8304 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000071529s
	[INFO] 10.244.0.8:34548 - 65505 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198464s
	[INFO] 10.244.0.8:34548 - 65209 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084361s
	[INFO] 10.244.0.8:45732 - 48780 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095991s
	[INFO] 10.244.0.8:45732 - 48577 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075237s
	[INFO] 10.244.0.8:48111 - 44007 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094432s
	[INFO] 10.244.0.8:48111 - 44175 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051812s
	[INFO] 10.244.0.8:54661 - 45113 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011191s
	[INFO] 10.244.0.8:54661 - 44955 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125648s
	[INFO] 10.244.0.23:32815 - 41479 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317766s
	[INFO] 10.244.0.23:55241 - 46997 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000113506s
	[INFO] 10.244.0.23:32971 - 50582 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126524s
	[INFO] 10.244.0.23:56239 - 4615 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106422s
	[INFO] 10.244.0.23:46110 - 53295 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146317s
	[INFO] 10.244.0.23:57583 - 28036 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092403s
	[INFO] 10.244.0.23:41341 - 34430 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001860332s
	[INFO] 10.244.0.23:46756 - 21526 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0017886s
	[INFO] 10.244.0.28:40171 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000434828s
	[INFO] 10.244.0.28:36379 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000227115s
	
	
	==> describe nodes <==
	Name:               addons-823768
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-823768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
	                    minikube.k8s.io/name=addons-823768
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T15_05_40_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-823768
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-823768"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 15:05:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-823768
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 15:10:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 15:08:13 +0000   Mon, 20 Jan 2025 15:05:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 15:08:13 +0000   Mon, 20 Jan 2025 15:05:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 15:08:13 +0000   Mon, 20 Jan 2025 15:05:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 15:08:13 +0000   Mon, 20 Jan 2025 15:05:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-823768
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ed69cfbae1c49d5a2adeea9f9d7ada9
	  System UUID:                2ed69cfb-ae1c-49d5-a2ad-eea9f9d7ada9
	  Boot ID:                    5745ae5a-4581-4558-8316-987961d0b42c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     hello-world-app-7d9564db4-njdj6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-g5ctf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m32s
	  kube-system                 amd-gpu-device-plugin-hd9wh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-668d6bf9bc-5vcsv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m40s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 csi-hostpathplugin-gnx78                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 etcd-addons-823768                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m44s
	  kube-system                 kube-apiserver-addons-823768                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-addons-823768        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-7rvmm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-addons-823768                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 snapshot-controller-68b874b76f-v9qfd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 snapshot-controller-68b874b76f-wz6d5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m36s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m44s  kubelet          Node addons-823768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s  kubelet          Node addons-823768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s  kubelet          Node addons-823768 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m44s  kubelet          Node addons-823768 status is now: NodeReady
	  Normal  RegisteredNode           4m41s  node-controller  Node addons-823768 event: Registered Node addons-823768 in Controller
	
	
	==> dmesg <==
	[  +4.423973] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.581285] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.484443] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[  +0.073928] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.273691] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +0.162747] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.214655] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.210113] kauditd_printk_skb: 116 callbacks suppressed
	[Jan20 15:06] kauditd_printk_skb: 110 callbacks suppressed
	[ +19.139201] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.836806] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.754303] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.384646] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.188215] kauditd_printk_skb: 39 callbacks suppressed
	[Jan20 15:07] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.189656] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.820035] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.687193] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.564951] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.553515] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.060585] kauditd_printk_skb: 51 callbacks suppressed
	[  +7.298488] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.013101] kauditd_printk_skb: 4 callbacks suppressed
	[Jan20 15:08] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.389449] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9] <==
	{"level":"warn","ts":"2025-01-20T15:06:53.622686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:06:53.214746Z","time spent":"407.928378ms","remote":"127.0.0.1:33576","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-20T15:06:53.622815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.236562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:06:53.622921Z","caller":"traceutil/trace.go:171","msg":"trace[1179903400] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1026; }","duration":"128.363318ms","start":"2025-01-20T15:06:53.494550Z","end":"2025-01-20T15:06:53.622913Z","steps":["trace[1179903400] 'agreement among raft nodes before linearized reading'  (duration: 128.237512ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:07:04.175891Z","caller":"traceutil/trace.go:171","msg":"trace[1687271508] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1127; }","duration":"182.064453ms","start":"2025-01-20T15:07:03.993807Z","end":"2025-01-20T15:07:04.175872Z","steps":["trace[1687271508] 'read index received'  (duration: 177.609883ms)","trace[1687271508] 'applied index is now lower than readState.Index'  (duration: 4.453716ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T15:07:04.176082Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.222905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:04.176101Z","caller":"traceutil/trace.go:171","msg":"trace[892120133] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"182.312552ms","start":"2025-01-20T15:07:03.993783Z","end":"2025-01-20T15:07:04.176096Z","steps":["trace[892120133] 'agreement among raft nodes before linearized reading'  (duration: 182.222961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:04.176369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.332376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:04.176412Z","caller":"traceutil/trace.go:171","msg":"trace[2055759117] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"140.400644ms","start":"2025-01-20T15:07:04.036004Z","end":"2025-01-20T15:07:04.176405Z","steps":["trace[2055759117] 'agreement among raft nodes before linearized reading'  (duration: 140.338392ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:07:06.637079Z","caller":"traceutil/trace.go:171","msg":"trace[138422048] linearizableReadLoop","detail":"{readStateIndex:1135; appliedIndex:1134; }","duration":"144.033657ms","start":"2025-01-20T15:07:06.493032Z","end":"2025-01-20T15:07:06.637065Z","steps":["trace[138422048] 'read index received'  (duration: 143.913692ms)","trace[138422048] 'applied index is now lower than readState.Index'  (duration: 119.506µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T15:07:06.637381Z","caller":"traceutil/trace.go:171","msg":"trace[1309473886] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"261.49564ms","start":"2025-01-20T15:07:06.375877Z","end":"2025-01-20T15:07:06.637373Z","steps":["trace[1309473886] 'process raft request'  (duration: 261.110224ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:06.637533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.488772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:06.637569Z","caller":"traceutil/trace.go:171","msg":"trace[673675145] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"144.554654ms","start":"2025-01-20T15:07:06.493009Z","end":"2025-01-20T15:07:06.637563Z","steps":["trace[673675145] 'agreement among raft nodes before linearized reading'  (duration: 144.494071ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:06.637663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.810086ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:06.637693Z","caller":"traceutil/trace.go:171","msg":"trace[790918003] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1102; }","duration":"142.850687ms","start":"2025-01-20T15:07:06.494838Z","end":"2025-01-20T15:07:06.637689Z","steps":["trace[790918003] 'agreement among raft nodes before linearized reading'  (duration: 142.808266ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:06.639008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.824656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:06.639058Z","caller":"traceutil/trace.go:171","msg":"trace[1373366693] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"102.929966ms","start":"2025-01-20T15:07:06.536120Z","end":"2025-01-20T15:07:06.639050Z","steps":["trace[1373366693] 'agreement among raft nodes before linearized reading'  (duration: 102.854164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:06.816709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.709345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:06.816810Z","caller":"traceutil/trace.go:171","msg":"trace[710433880] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"102.836863ms","start":"2025-01-20T15:07:06.713959Z","end":"2025-01-20T15:07:06.816796Z","steps":["trace[710433880] 'range keys from in-memory index tree'  (duration: 102.636914ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:07:38.235473Z","caller":"traceutil/trace.go:171","msg":"trace[541629752] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1316; }","duration":"424.155374ms","start":"2025-01-20T15:07:37.811290Z","end":"2025-01-20T15:07:38.235445Z","steps":["trace[541629752] 'process raft request'  (duration: 424.050614ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:38.235864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:07:37.811275Z","time spent":"424.383764ms","remote":"127.0.0.1:33794","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:865 > success:<request_delete_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > > failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
	{"level":"info","ts":"2025-01-20T15:07:38.236302Z","caller":"traceutil/trace.go:171","msg":"trace[1154709480] linearizableReadLoop","detail":"{readStateIndex:1357; appliedIndex:1357; }","duration":"294.273699ms","start":"2025-01-20T15:07:37.942019Z","end":"2025-01-20T15:07:38.236292Z","steps":["trace[1154709480] 'read index received'  (duration: 294.270262ms)","trace[1154709480] 'applied index is now lower than readState.Index'  (duration: 2.637µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T15:07:38.237026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.993346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:38.237394Z","caller":"traceutil/trace.go:171","msg":"trace[1054917839] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1316; }","duration":"295.389592ms","start":"2025-01-20T15:07:37.941996Z","end":"2025-01-20T15:07:38.237385Z","steps":["trace[1054917839] 'agreement among raft nodes before linearized reading'  (duration: 294.979507ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:38.237161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.437044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:38.237673Z","caller":"traceutil/trace.go:171","msg":"trace[2072934601] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1316; }","duration":"232.963958ms","start":"2025-01-20T15:07:38.004697Z","end":"2025-01-20T15:07:38.237661Z","steps":["trace[2072934601] 'agreement among raft nodes before linearized reading'  (duration: 232.442407ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:10:24 up 5 min,  0 users,  load average: 1.07, 1.63, 0.85
	Linux addons-823768 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061] <==
	W0120 15:06:37.744311       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 15:06:37.744398       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 15:06:37.745499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 15:06:37.745571       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0120 15:06:41.753217       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.184.109:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.184.109:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W0120 15:06:41.753511       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 15:06:41.753603       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 15:06:41.754487       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0120 15:06:41.791340       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0120 15:07:22.363530       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:54916: use of closed network connection
	E0120 15:07:22.559816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:54946: use of closed network connection
	I0120 15:07:32.120012       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.207.216"}
	I0120 15:07:57.385722       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0120 15:07:58.513579       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0120 15:07:58.845666       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0120 15:08:02.223494       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0120 15:08:02.404784       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.212.216"}
	I0120 15:08:42.776322       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0120 15:10:22.830795       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.53.157"}
	
	
	==> kube-controller-manager [910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6] <==
	E0120 15:08:26.413138       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="daemonsets.apps is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"daemonsets\" in API group \"apps\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="apps/v1, Resource=daemonsets"
	E0120 15:08:26.417807       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="replicationcontrollers is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"replicationcontrollers\" in API group \"\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="/v1, Resource=replicationcontrollers"
	E0120 15:08:26.421903       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="secrets is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"secrets\" in API group \"\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="/v1, Resource=secrets"
	E0120 15:08:26.426042       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="replicasets.apps is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"replicasets\" in API group \"apps\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="apps/v1, Resource=replicasets"
	I0120 15:08:31.434321       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0120 15:08:32.388018       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 15:08:32.389024       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:08:32.390074       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:08:32.390107       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 15:08:58.482023       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 15:08:58.482933       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:08:58.483988       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:08:58.484035       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 15:09:28.638875       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 15:09:28.639966       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:09:28.640878       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:09:28.640978       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 15:10:20.359661       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 15:10:20.360969       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:10:20.361955       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:10:20.362017       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0120 15:10:22.658421       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="44.48692ms"
	I0120 15:10:22.682540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="24.006785ms"
	I0120 15:10:22.682624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="34.34µs"
	I0120 15:10:22.707043       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="39.409µs"
	
	
	==> kube-proxy [a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 15:05:47.687355       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 15:05:47.705944       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0120 15:05:47.706029       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 15:05:47.853450       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 15:05:47.853525       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 15:05:47.855376       1 server_linux.go:170] "Using iptables Proxier"
	I0120 15:05:47.891958       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 15:05:47.892217       1 server.go:497] "Version info" version="v1.32.0"
	I0120 15:05:47.892280       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 15:05:47.910195       1 config.go:199] "Starting service config controller"
	I0120 15:05:47.910308       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 15:05:47.910348       1 config.go:105] "Starting endpoint slice config controller"
	I0120 15:05:47.910353       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 15:05:47.916092       1 config.go:329] "Starting node config controller"
	I0120 15:05:47.916125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 15:05:48.012507       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 15:05:48.012551       1 shared_informer.go:320] Caches are synced for service config
	I0120 15:05:48.021389       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115] <==
	E0120 15:05:37.227928       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:37.226486       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 15:05:37.227950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0120 15:05:37.225889       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:37.228648       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 15:05:37.228765       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.059356       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 15:05:38.059463       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.140355       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 15:05:38.140406       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.152627       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 15:05:38.152684       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 15:05:38.196083       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 15:05:38.196182       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.358544       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 15:05:38.358698       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.425806       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 15:05:38.425899       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.480160       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 15:05:38.480210       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.527483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 15:05:38.527585       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.532584       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 15:05:38.532947       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0120 15:05:40.619676       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 15:09:30 addons-823768 kubelet[1231]: E0120 15:09:30.247076    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385770246571745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:09:34 addons-823768 kubelet[1231]: E0120 15:09:34.960929    1231 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a"
	Jan 20 15:09:39 addons-823768 kubelet[1231]: E0120 15:09:39.983153    1231 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 15:09:39 addons-823768 kubelet[1231]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 15:09:39 addons-823768 kubelet[1231]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 15:09:39 addons-823768 kubelet[1231]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 15:09:39 addons-823768 kubelet[1231]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 15:09:40 addons-823768 kubelet[1231]: E0120 15:09:40.250459    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385780249884823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:09:40 addons-823768 kubelet[1231]: E0120 15:09:40.250632    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385780249884823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:09:50 addons-823768 kubelet[1231]: E0120 15:09:50.253582    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385790253140080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:09:50 addons-823768 kubelet[1231]: E0120 15:09:50.253613    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385790253140080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:10:00 addons-823768 kubelet[1231]: E0120 15:10:00.258653    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385800257701607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:10:00 addons-823768 kubelet[1231]: E0120 15:10:00.258759    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385800257701607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:10:04 addons-823768 kubelet[1231]: I0120 15:10:04.960885    1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 20 15:10:09 addons-823768 kubelet[1231]: I0120 15:10:09.961174    1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hd9wh" secret="" err="secret \"gcp-auth\" not found"
	Jan 20 15:10:10 addons-823768 kubelet[1231]: E0120 15:10:10.262958    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385810262380730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:10:10 addons-823768 kubelet[1231]: E0120 15:10:10.263068    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385810262380730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:10:18 addons-823768 kubelet[1231]: E0120 15:10:18.601818    1231 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jan 20 15:10:18 addons-823768 kubelet[1231]: E0120 15:10:18.602145    1231 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jan 20 15:10:18 addons-823768 kubelet[1231]: E0120 15:10:18.602423    1231 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr84p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jan 20 15:10:18 addons-823768 kubelet[1231]: E0120 15:10:18.603674    1231 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a"
	Jan 20 15:10:20 addons-823768 kubelet[1231]: E0120 15:10:20.265608    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385820265041453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:10:20 addons-823768 kubelet[1231]: E0120 15:10:20.265653    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385820265041453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:10:22 addons-823768 kubelet[1231]: I0120 15:10:22.668184    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="3d8d1e3d-79bc-45d9-ab92-9203d7b75946" containerName="local-path-provisioner"
	Jan 20 15:10:22 addons-823768 kubelet[1231]: I0120 15:10:22.754051    1231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbczl\" (UniqueName: \"kubernetes.io/projected/11eae7f9-7cd6-44da-b989-0b800a978cc2-kube-api-access-pbczl\") pod \"hello-world-app-7d9564db4-njdj6\" (UID: \"11eae7f9-7cd6-44da-b989-0b800a978cc2\") " pod="default/hello-world-app-7d9564db4-njdj6"
	
	
	==> storage-provisioner [6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6] <==
	I0120 15:05:54.184778       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 15:05:54.216206       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 15:05:54.216325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 15:05:54.246555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 15:05:54.246676       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7!
	I0120 15:05:54.250017       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7873a08-e047-4b60-90dd-2fa00f314b75", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7 became leader
	I0120 15:05:54.347624       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-823768 -n addons-823768
helpers_test.go:261: (dbg) Run:  kubectl --context addons-823768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-njdj6 task-pv-pod ingress-nginx-admission-create-6vqcs ingress-nginx-admission-patch-xh2h7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-823768 describe pod hello-world-app-7d9564db4-njdj6 task-pv-pod ingress-nginx-admission-create-6vqcs ingress-nginx-admission-patch-xh2h7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-823768 describe pod hello-world-app-7d9564db4-njdj6 task-pv-pod ingress-nginx-admission-create-6vqcs ingress-nginx-admission-patch-xh2h7: exit status 1 (75.310693ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-njdj6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-823768/192.168.39.158
	Start Time:       Mon, 20 Jan 2025 15:10:22 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbczl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pbczl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-njdj6 to addons-823768
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-823768/192.168.39.158
	Start Time:       Mon, 20 Jan 2025 15:08:04 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gr84p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-gr84p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m21s                default-scheduler  Successfully assigned default/task-pv-pod to addons-823768
	  Warning  Failed     108s                 kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    51s (x2 over 108s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     51s (x2 over 108s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    38s (x3 over 2m19s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7s (x3 over 108s)    kubelet            Error: ErrImagePull
	  Warning  Failed     7s (x2 over 66s)     kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6vqcs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xh2h7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-823768 describe pod hello-world-app-7d9564db4-njdj6 task-pv-pod ingress-nginx-admission-create-6vqcs ingress-nginx-admission-patch-xh2h7: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable ingress-dns --alsologtostderr -v=1: (1.511424985s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable ingress --alsologtostderr -v=1: (7.797746092s)
--- FAIL: TestAddons/parallel/Ingress (152.82s)
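
Note: the pod descriptions above show image pulls from docker.io failing with "toomanyrequests", i.e. the Docker Hub anonymous pull rate limit, which leaves task-pv-pod stuck in ImagePullBackOff while this test runs. A minimal mitigation sketch for a rerun, assuming authenticated Docker Hub credentials are available; the secret name "regcred" and the placeholder credentials below are illustrative only and not part of this report:

	kubectl --context addons-823768 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> --docker-password=<dockerhub-token>
	# reference the secret from spec.imagePullSecrets in the pod manifest, or
	# pre-seed the image into the cluster so no anonymous remote pull is needed:
	out/minikube-linux-amd64 -p addons-823768 cache add docker.io/nginx:latest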

                                                
                                    
TestAddons/parallel/CSI (387.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0120 15:07:46.905097 2136749 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0120 15:07:46.909983 2136749 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 15:07:46.910017 2136749 kapi.go:107] duration metric: took 4.946193ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.956568ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-823768 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-823768 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a] Pending
helpers_test.go:344: "task-pv-pod" [d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:506: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:506: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-823768 -n addons-823768
addons_test.go:506: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-01-20 15:14:04.503738584 +0000 UTC m=+563.791169751
addons_test.go:506: (dbg) Run:  kubectl --context addons-823768 describe po task-pv-pod -n default
addons_test.go:506: (dbg) kubectl --context addons-823768 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-823768/192.168.39.158
Start Time:       Mon, 20 Jan 2025 15:08:04 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
  IP:  10.244.0.31
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gr84p (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-gr84p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-823768
  Warning  Failed     5m27s                 kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    44s (x10 over 5m27s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     44s (x10 over 5m27s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    32s (x5 over 5m58s)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     1s (x5 over 5m27s)    kubelet            Error: ErrImagePull
  Warning  Failed     1s (x4 over 4m45s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
addons_test.go:506: (dbg) Run:  kubectl --context addons-823768 logs task-pv-pod -n default
addons_test.go:506: (dbg) Non-zero exit: kubectl --context addons-823768 logs task-pv-pod -n default: exit status 1 (74.333344ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:506: kubectl --context addons-823768 logs task-pv-pod -n default: exit status 1
addons_test.go:507: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
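
Note: the events above point at the Docker Hub pull rate limit rather than at the csi-hostpath-driver itself; the pod was scheduled, received IP 10.244.0.31, and no volume or mount errors appear in its events. A quick way to double-check the storage side independently of the image pull, sketched against the same profile and context (read-only commands, assuming nothing beyond what the test already created):

	kubectl --context addons-823768 get pvc hpvc -n default
	kubectl --context addons-823768 get pv
	kubectl --context addons-823768 get pods -n kube-system -l kubernetes.io/minikube-addons=csi-hostpath-driver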
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-823768 -n addons-823768
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 logs -n 25: (1.303586652s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-647713                                                                     | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| delete  | -p download-only-193100                                                                     | download-only-193100 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| delete  | -p download-only-647713                                                                     | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-318745 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | binary-mirror-318745                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45603                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-318745                                                                     | binary-mirror-318745 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| addons  | disable dashboard -p                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | addons-823768                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | addons-823768                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-823768 --wait=true                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | -p addons-823768                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823768 addons                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-823768 ssh cat                                                                       | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | /opt/local-path-provisioner/pvc-f17509c2-6d0e-4c09-9067-5f1359f0d7a1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823768 addons                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-823768 ip                                                                            | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823768 addons                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-823768 addons                                                                        | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-823768 ssh curl -s                                                                   | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-823768 ip                                                                            | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:10 UTC | 20 Jan 25 15:10 UTC |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:10 UTC | 20 Jan 25 15:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823768 addons disable                                                                | addons-823768        | jenkins | v1.35.0 | 20 Jan 25 15:10 UTC | 20 Jan 25 15:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 15:04:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 15:04:57.624256 2137369 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:04:57.624398 2137369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:04:57.624409 2137369 out.go:358] Setting ErrFile to fd 2...
	I0120 15:04:57.624415 2137369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:04:57.624591 2137369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:04:57.625297 2137369 out.go:352] Setting JSON to false
	I0120 15:04:57.626292 2137369 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":24444,"bootTime":1737361054,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:04:57.626412 2137369 start.go:139] virtualization: kvm guest
	I0120 15:04:57.628458 2137369 out.go:177] * [addons-823768] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:04:57.630260 2137369 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 15:04:57.630256 2137369 notify.go:220] Checking for updates...
	I0120 15:04:57.631582 2137369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:04:57.633104 2137369 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:04:57.634244 2137369 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:04:57.635455 2137369 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 15:04:57.636773 2137369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 15:04:57.638391 2137369 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:04:57.672908 2137369 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 15:04:57.674463 2137369 start.go:297] selected driver: kvm2
	I0120 15:04:57.674489 2137369 start.go:901] validating driver "kvm2" against <nil>
	I0120 15:04:57.674515 2137369 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:04:57.675362 2137369 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 15:04:57.675488 2137369 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 15:04:57.691694 2137369 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 15:04:57.691745 2137369 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 15:04:57.691969 2137369 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 15:04:57.692005 2137369 cni.go:84] Creating CNI manager for ""
	I0120 15:04:57.692050 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:04:57.692059 2137369 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 15:04:57.692109 2137369 start.go:340] cluster config:
	{Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:04:57.692209 2137369 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 15:04:57.695407 2137369 out.go:177] * Starting "addons-823768" primary control-plane node in "addons-823768" cluster
	I0120 15:04:57.697150 2137369 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 15:04:57.697201 2137369 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 15:04:57.697211 2137369 cache.go:56] Caching tarball of preloaded images
	I0120 15:04:57.697294 2137369 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 15:04:57.697305 2137369 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 15:04:57.697657 2137369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json ...
	I0120 15:04:57.697681 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json: {Name:mk4b31787ffc80a58bfaed119855eddc3ee78983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:04:57.697836 2137369 start.go:360] acquireMachinesLock for addons-823768: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 15:04:57.697883 2137369 start.go:364] duration metric: took 33.177µs to acquireMachinesLock for "addons-823768"
	I0120 15:04:57.697901 2137369 start.go:93] Provisioning new machine with config: &{Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 15:04:57.697959 2137369 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 15:04:57.699982 2137369 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0120 15:04:57.700137 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:04:57.700187 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:04:57.715764 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0120 15:04:57.716302 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:04:57.717042 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:04:57.717071 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:04:57.717464 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:04:57.717672 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
	I0120 15:04:57.717839 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:04:57.718072 2137369 start.go:159] libmachine.API.Create for "addons-823768" (driver="kvm2")
	I0120 15:04:57.718100 2137369 client.go:168] LocalClient.Create starting
	I0120 15:04:57.718140 2137369 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 15:04:57.817798 2137369 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 15:04:57.956327 2137369 main.go:141] libmachine: Running pre-create checks...
	I0120 15:04:57.956353 2137369 main.go:141] libmachine: (addons-823768) Calling .PreCreateCheck
	I0120 15:04:57.956945 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
	I0120 15:04:57.957429 2137369 main.go:141] libmachine: Creating machine...
	I0120 15:04:57.957442 2137369 main.go:141] libmachine: (addons-823768) Calling .Create
	I0120 15:04:57.957600 2137369 main.go:141] libmachine: (addons-823768) creating KVM machine...
	I0120 15:04:57.957614 2137369 main.go:141] libmachine: (addons-823768) creating network...
	I0120 15:04:57.958969 2137369 main.go:141] libmachine: (addons-823768) DBG | found existing default KVM network
	I0120 15:04:57.959704 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:57.959552 2137391 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201200}
	I0120 15:04:57.959765 2137369 main.go:141] libmachine: (addons-823768) DBG | created network xml: 
	I0120 15:04:57.959786 2137369 main.go:141] libmachine: (addons-823768) DBG | <network>
	I0120 15:04:57.959800 2137369 main.go:141] libmachine: (addons-823768) DBG |   <name>mk-addons-823768</name>
	I0120 15:04:57.959807 2137369 main.go:141] libmachine: (addons-823768) DBG |   <dns enable='no'/>
	I0120 15:04:57.959814 2137369 main.go:141] libmachine: (addons-823768) DBG |   
	I0120 15:04:57.959822 2137369 main.go:141] libmachine: (addons-823768) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0120 15:04:57.959830 2137369 main.go:141] libmachine: (addons-823768) DBG |     <dhcp>
	I0120 15:04:57.959836 2137369 main.go:141] libmachine: (addons-823768) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0120 15:04:57.959845 2137369 main.go:141] libmachine: (addons-823768) DBG |     </dhcp>
	I0120 15:04:57.959850 2137369 main.go:141] libmachine: (addons-823768) DBG |   </ip>
	I0120 15:04:57.959857 2137369 main.go:141] libmachine: (addons-823768) DBG |   
	I0120 15:04:57.959868 2137369 main.go:141] libmachine: (addons-823768) DBG | </network>
	I0120 15:04:57.959880 2137369 main.go:141] libmachine: (addons-823768) DBG | 
	I0120 15:04:57.965405 2137369 main.go:141] libmachine: (addons-823768) DBG | trying to create private KVM network mk-addons-823768 192.168.39.0/24...
	I0120 15:04:58.037588 2137369 main.go:141] libmachine: (addons-823768) DBG | private KVM network mk-addons-823768 192.168.39.0/24 created
	I0120 15:04:58.037645 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.037543 2137391 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:04:58.037659 2137369 main.go:141] libmachine: (addons-823768) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 ...
	I0120 15:04:58.037694 2137369 main.go:141] libmachine: (addons-823768) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 15:04:58.037727 2137369 main.go:141] libmachine: (addons-823768) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 15:04:58.314475 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.314330 2137391 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa...
	I0120 15:04:58.360414 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.360209 2137391 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/addons-823768.rawdisk...
	I0120 15:04:58.360466 2137369 main.go:141] libmachine: (addons-823768) DBG | Writing magic tar header
	I0120 15:04:58.360505 2137369 main.go:141] libmachine: (addons-823768) DBG | Writing SSH key tar header
	I0120 15:04:58.360517 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.360380 2137391 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 ...
	I0120 15:04:58.360543 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768
	I0120 15:04:58.360562 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 (perms=drwx------)
	I0120 15:04:58.360574 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 15:04:58.360589 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:04:58.360598 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 15:04:58.360610 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 15:04:58.360622 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins
	I0120 15:04:58.360631 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home
	I0120 15:04:58.360640 2137369 main.go:141] libmachine: (addons-823768) DBG | skipping /home - not owner
	I0120 15:04:58.360718 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 15:04:58.360752 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 15:04:58.360768 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 15:04:58.360782 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 15:04:58.360799 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 15:04:58.360810 2137369 main.go:141] libmachine: (addons-823768) creating domain...
	I0120 15:04:58.362224 2137369 main.go:141] libmachine: (addons-823768) define libvirt domain using xml: 
	I0120 15:04:58.362248 2137369 main.go:141] libmachine: (addons-823768) <domain type='kvm'>
	I0120 15:04:58.362256 2137369 main.go:141] libmachine: (addons-823768)   <name>addons-823768</name>
	I0120 15:04:58.362262 2137369 main.go:141] libmachine: (addons-823768)   <memory unit='MiB'>4000</memory>
	I0120 15:04:58.362267 2137369 main.go:141] libmachine: (addons-823768)   <vcpu>2</vcpu>
	I0120 15:04:58.362272 2137369 main.go:141] libmachine: (addons-823768)   <features>
	I0120 15:04:58.362277 2137369 main.go:141] libmachine: (addons-823768)     <acpi/>
	I0120 15:04:58.362282 2137369 main.go:141] libmachine: (addons-823768)     <apic/>
	I0120 15:04:58.362289 2137369 main.go:141] libmachine: (addons-823768)     <pae/>
	I0120 15:04:58.362296 2137369 main.go:141] libmachine: (addons-823768)     
	I0120 15:04:58.362301 2137369 main.go:141] libmachine: (addons-823768)   </features>
	I0120 15:04:58.362309 2137369 main.go:141] libmachine: (addons-823768)   <cpu mode='host-passthrough'>
	I0120 15:04:58.362325 2137369 main.go:141] libmachine: (addons-823768)   
	I0120 15:04:58.362336 2137369 main.go:141] libmachine: (addons-823768)   </cpu>
	I0120 15:04:58.362342 2137369 main.go:141] libmachine: (addons-823768)   <os>
	I0120 15:04:58.362350 2137369 main.go:141] libmachine: (addons-823768)     <type>hvm</type>
	I0120 15:04:58.362356 2137369 main.go:141] libmachine: (addons-823768)     <boot dev='cdrom'/>
	I0120 15:04:58.362363 2137369 main.go:141] libmachine: (addons-823768)     <boot dev='hd'/>
	I0120 15:04:58.362369 2137369 main.go:141] libmachine: (addons-823768)     <bootmenu enable='no'/>
	I0120 15:04:58.362377 2137369 main.go:141] libmachine: (addons-823768)   </os>
	I0120 15:04:58.362382 2137369 main.go:141] libmachine: (addons-823768)   <devices>
	I0120 15:04:58.362388 2137369 main.go:141] libmachine: (addons-823768)     <disk type='file' device='cdrom'>
	I0120 15:04:58.362397 2137369 main.go:141] libmachine: (addons-823768)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/boot2docker.iso'/>
	I0120 15:04:58.362405 2137369 main.go:141] libmachine: (addons-823768)       <target dev='hdc' bus='scsi'/>
	I0120 15:04:58.362411 2137369 main.go:141] libmachine: (addons-823768)       <readonly/>
	I0120 15:04:58.362418 2137369 main.go:141] libmachine: (addons-823768)     </disk>
	I0120 15:04:58.362432 2137369 main.go:141] libmachine: (addons-823768)     <disk type='file' device='disk'>
	I0120 15:04:58.362442 2137369 main.go:141] libmachine: (addons-823768)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 15:04:58.362450 2137369 main.go:141] libmachine: (addons-823768)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/addons-823768.rawdisk'/>
	I0120 15:04:58.362458 2137369 main.go:141] libmachine: (addons-823768)       <target dev='hda' bus='virtio'/>
	I0120 15:04:58.362463 2137369 main.go:141] libmachine: (addons-823768)     </disk>
	I0120 15:04:58.362471 2137369 main.go:141] libmachine: (addons-823768)     <interface type='network'>
	I0120 15:04:58.362491 2137369 main.go:141] libmachine: (addons-823768)       <source network='mk-addons-823768'/>
	I0120 15:04:58.362504 2137369 main.go:141] libmachine: (addons-823768)       <model type='virtio'/>
	I0120 15:04:58.362509 2137369 main.go:141] libmachine: (addons-823768)     </interface>
	I0120 15:04:58.362514 2137369 main.go:141] libmachine: (addons-823768)     <interface type='network'>
	I0120 15:04:58.362530 2137369 main.go:141] libmachine: (addons-823768)       <source network='default'/>
	I0120 15:04:58.362537 2137369 main.go:141] libmachine: (addons-823768)       <model type='virtio'/>
	I0120 15:04:58.362542 2137369 main.go:141] libmachine: (addons-823768)     </interface>
	I0120 15:04:58.362547 2137369 main.go:141] libmachine: (addons-823768)     <serial type='pty'>
	I0120 15:04:58.362552 2137369 main.go:141] libmachine: (addons-823768)       <target port='0'/>
	I0120 15:04:58.362558 2137369 main.go:141] libmachine: (addons-823768)     </serial>
	I0120 15:04:58.362565 2137369 main.go:141] libmachine: (addons-823768)     <console type='pty'>
	I0120 15:04:58.362579 2137369 main.go:141] libmachine: (addons-823768)       <target type='serial' port='0'/>
	I0120 15:04:58.362587 2137369 main.go:141] libmachine: (addons-823768)     </console>
	I0120 15:04:58.362594 2137369 main.go:141] libmachine: (addons-823768)     <rng model='virtio'>
	I0120 15:04:58.362642 2137369 main.go:141] libmachine: (addons-823768)       <backend model='random'>/dev/random</backend>
	I0120 15:04:58.362668 2137369 main.go:141] libmachine: (addons-823768)     </rng>
	I0120 15:04:58.362682 2137369 main.go:141] libmachine: (addons-823768)     
	I0120 15:04:58.362694 2137369 main.go:141] libmachine: (addons-823768)     
	I0120 15:04:58.362704 2137369 main.go:141] libmachine: (addons-823768)   </devices>
	I0120 15:04:58.362716 2137369 main.go:141] libmachine: (addons-823768) </domain>
	I0120 15:04:58.362728 2137369 main.go:141] libmachine: (addons-823768) 
	I0120 15:04:58.367308 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:fe:73:ee in network default
	I0120 15:04:58.367817 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:04:58.367831 2137369 main.go:141] libmachine: (addons-823768) starting domain...
	I0120 15:04:58.367843 2137369 main.go:141] libmachine: (addons-823768) ensuring networks are active...
	I0120 15:04:58.368477 2137369 main.go:141] libmachine: (addons-823768) Ensuring network default is active
	I0120 15:04:58.368765 2137369 main.go:141] libmachine: (addons-823768) Ensuring network mk-addons-823768 is active
	I0120 15:04:58.369246 2137369 main.go:141] libmachine: (addons-823768) getting domain XML...
	I0120 15:04:58.369915 2137369 main.go:141] libmachine: (addons-823768) creating domain...
	I0120 15:04:59.601024 2137369 main.go:141] libmachine: (addons-823768) waiting for IP...
	I0120 15:04:59.602003 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:04:59.602406 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:04:59.602487 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:59.602427 2137391 retry.go:31] will retry after 258.668513ms: waiting for domain to come up
	I0120 15:04:59.863113 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:04:59.863860 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:04:59.863887 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:59.863820 2137391 retry.go:31] will retry after 284.943032ms: waiting for domain to come up
	I0120 15:05:00.150387 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:00.150799 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:00.150864 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:00.150788 2137391 retry.go:31] will retry after 487.888334ms: waiting for domain to come up
	I0120 15:05:00.640607 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:00.641049 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:00.641074 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:00.640997 2137391 retry.go:31] will retry after 506.402264ms: waiting for domain to come up
	I0120 15:05:01.148692 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:01.149072 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:01.149103 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:01.149042 2137391 retry.go:31] will retry after 610.710776ms: waiting for domain to come up
	I0120 15:05:01.761084 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:01.761615 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:01.761660 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:01.761555 2137391 retry.go:31] will retry after 869.953856ms: waiting for domain to come up
	I0120 15:05:02.632849 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:02.633348 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:02.633383 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:02.633307 2137391 retry.go:31] will retry after 878.477724ms: waiting for domain to come up
	I0120 15:05:03.512981 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:03.513483 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:03.513516 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:03.513425 2137391 retry.go:31] will retry after 1.196488457s: waiting for domain to come up
	I0120 15:05:04.711923 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:04.712468 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:04.712555 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:04.712444 2137391 retry.go:31] will retry after 1.238217465s: waiting for domain to come up
	I0120 15:05:05.952338 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:05.952718 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:05.952767 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:05.952682 2137391 retry.go:31] will retry after 1.963992606s: waiting for domain to come up
	I0120 15:05:07.919115 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:07.919614 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:07.919688 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:07.919591 2137391 retry.go:31] will retry after 2.598377206s: waiting for domain to come up
	I0120 15:05:10.519561 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:10.519995 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:10.520062 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:10.519979 2137391 retry.go:31] will retry after 2.387749397s: waiting for domain to come up
	I0120 15:05:12.909148 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:12.909462 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:12.909482 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:12.909426 2137391 retry.go:31] will retry after 3.566319877s: waiting for domain to come up
	I0120 15:05:16.480251 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:16.480589 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
	I0120 15:05:16.480632 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:16.480539 2137391 retry.go:31] will retry after 5.139483327s: waiting for domain to come up
	I0120 15:05:21.624584 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.625210 2137369 main.go:141] libmachine: (addons-823768) found domain IP: 192.168.39.158
	I0120 15:05:21.625248 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has current primary IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.625255 2137369 main.go:141] libmachine: (addons-823768) reserving static IP address...
	I0120 15:05:21.625737 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find host DHCP lease matching {name: "addons-823768", mac: "52:54:00:25:8d:22", ip: "192.168.39.158"} in network mk-addons-823768
	I0120 15:05:21.704346 2137369 main.go:141] libmachine: (addons-823768) DBG | Getting to WaitForSSH function...
	I0120 15:05:21.704393 2137369 main.go:141] libmachine: (addons-823768) reserved static IP address 192.168.39.158 for domain addons-823768
	I0120 15:05:21.704447 2137369 main.go:141] libmachine: (addons-823768) waiting for SSH...
	I0120 15:05:21.707052 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.707627 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:21.707662 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.707819 2137369 main.go:141] libmachine: (addons-823768) DBG | Using SSH client type: external
	I0120 15:05:21.707849 2137369 main.go:141] libmachine: (addons-823768) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa (-rw-------)
	I0120 15:05:21.707888 2137369 main.go:141] libmachine: (addons-823768) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 15:05:21.707907 2137369 main.go:141] libmachine: (addons-823768) DBG | About to run SSH command:
	I0120 15:05:21.707924 2137369 main.go:141] libmachine: (addons-823768) DBG | exit 0
	I0120 15:05:21.831180 2137369 main.go:141] libmachine: (addons-823768) DBG | SSH cmd err, output: <nil>: 
	I0120 15:05:21.831428 2137369 main.go:141] libmachine: (addons-823768) KVM machine creation complete
	I0120 15:05:21.831824 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
	I0120 15:05:21.832433 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:21.832624 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:21.832787 2137369 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 15:05:21.832803 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:21.834150 2137369 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 15:05:21.834163 2137369 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 15:05:21.834169 2137369 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 15:05:21.834174 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:21.836638 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.836979 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:21.837011 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.837216 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:21.837461 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:21.837656 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:21.837855 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:21.838060 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:21.838317 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:21.838332 2137369 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 15:05:21.938133 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 15:05:21.938165 2137369 main.go:141] libmachine: Detecting the provisioner...
	I0120 15:05:21.938176 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:21.941079 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.941442 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:21.941472 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:21.941599 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:21.941824 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:21.942016 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:21.942197 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:21.942359 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:21.942538 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:21.942550 2137369 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 15:05:22.044310 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 15:05:22.044405 2137369 main.go:141] libmachine: found compatible host: buildroot
	I0120 15:05:22.044421 2137369 main.go:141] libmachine: Provisioning with buildroot...
	I0120 15:05:22.044435 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
	I0120 15:05:22.044699 2137369 buildroot.go:166] provisioning hostname "addons-823768"
	I0120 15:05:22.044733 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
	I0120 15:05:22.044923 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.047943 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.048353 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.048374 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.048517 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.048723 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.048877 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.048970 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.049121 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:22.049312 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:22.049324 2137369 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-823768 && echo "addons-823768" | sudo tee /etc/hostname
	I0120 15:05:22.166123 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-823768
	
	I0120 15:05:22.166193 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.169246 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.169621 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.169659 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.169836 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.170038 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.170186 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.170305 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.170495 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:22.170736 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:22.170762 2137369 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-823768' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-823768/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-823768' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 15:05:22.280555 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
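The guarded script above only edits /etc/hosts when the hostname entry is missing: it rewrites an existing 127.0.1.1 line, or appends one otherwise. A minimal Go sketch of the same idempotent edit, operating on the file contents as a string (illustrative only, not minikube's actual implementation; the sample input is made up):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname returns hosts with a "127.0.1.1 <name>" entry present:
// an existing 127.0.1.1 line is rewritten, otherwise a new entry is appended.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	sample := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(sample, "addons-823768"))
}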
	I0120 15:05:22.280595 2137369 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 15:05:22.280622 2137369 buildroot.go:174] setting up certificates
	I0120 15:05:22.280638 2137369 provision.go:84] configureAuth start
	I0120 15:05:22.280654 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
	I0120 15:05:22.281026 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
	I0120 15:05:22.283951 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.284335 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.284358 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.284533 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.286813 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.287192 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.287215 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.287344 2137369 provision.go:143] copyHostCerts
	I0120 15:05:22.287426 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 15:05:22.287580 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 15:05:22.287682 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 15:05:22.287769 2137369 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.addons-823768 san=[127.0.0.1 192.168.39.158 addons-823768 localhost minikube]
	I0120 15:05:22.401850 2137369 provision.go:177] copyRemoteCerts
	I0120 15:05:22.401946 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 15:05:22.401974 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.405186 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.405681 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.405710 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.405977 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.406213 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.406368 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.406524 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:22.489134 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 15:05:22.514579 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 15:05:22.539697 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 15:05:22.564884 2137369 provision.go:87] duration metric: took 284.22466ms to configureAuth
	I0120 15:05:22.564927 2137369 buildroot.go:189] setting minikube options for container-runtime
	I0120 15:05:22.565156 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:05:22.565249 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.568228 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.568661 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.568706 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.568801 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.569007 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.569179 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.569341 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.569501 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:22.569699 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:22.569716 2137369 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 15:05:22.802503 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 15:05:22.802531 2137369 main.go:141] libmachine: Checking connection to Docker...
	I0120 15:05:22.802540 2137369 main.go:141] libmachine: (addons-823768) Calling .GetURL
	I0120 15:05:22.803962 2137369 main.go:141] libmachine: (addons-823768) DBG | using libvirt version 6000000
	I0120 15:05:22.806234 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.806594 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.806655 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.806814 2137369 main.go:141] libmachine: Docker is up and running!
	I0120 15:05:22.806829 2137369 main.go:141] libmachine: Reticulating splines...
	I0120 15:05:22.806837 2137369 client.go:171] duration metric: took 25.088726295s to LocalClient.Create
	I0120 15:05:22.806864 2137369 start.go:167] duration metric: took 25.088792622s to libmachine.API.Create "addons-823768"
	I0120 15:05:22.806874 2137369 start.go:293] postStartSetup for "addons-823768" (driver="kvm2")
	I0120 15:05:22.806886 2137369 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 15:05:22.806906 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:22.807197 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 15:05:22.807222 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.809507 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.809856 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.809877 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.810074 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.810283 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.810491 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.810686 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:22.893410 2137369 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 15:05:22.897799 2137369 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 15:05:22.897835 2137369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 15:05:22.897908 2137369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 15:05:22.897935 2137369 start.go:296] duration metric: took 91.053195ms for postStartSetup
	I0120 15:05:22.897999 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
	I0120 15:05:22.898651 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
	I0120 15:05:22.902713 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.903149 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.903182 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.903416 2137369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json ...
	I0120 15:05:22.903615 2137369 start.go:128] duration metric: took 25.205644985s to createHost
	I0120 15:05:22.903638 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:22.905563 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.905853 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:22.905900 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:22.905949 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:22.906149 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.906296 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:22.906429 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:22.906664 2137369 main.go:141] libmachine: Using SSH client type: native
	I0120 15:05:22.906868 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0120 15:05:22.906880 2137369 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 15:05:23.008106 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737385522.980460970
	
	I0120 15:05:23.008135 2137369 fix.go:216] guest clock: 1737385522.980460970
	I0120 15:05:23.008143 2137369 fix.go:229] Guest: 2025-01-20 15:05:22.98046097 +0000 UTC Remote: 2025-01-20 15:05:22.903626964 +0000 UTC m=+25.320898969 (delta=76.834006ms)
	I0120 15:05:23.008215 2137369 fix.go:200] guest clock delta is within tolerance: 76.834006ms
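The guest-clock check above runs `date +%s.%N` in the VM and compares the result with the host's wall clock. A rough Go sketch of that comparison; the one-second tolerance is an assumption for illustration, not necessarily the value minikube uses:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1737385522.980460970" (seconds.nanoseconds) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad or truncate the fractional part to exactly 9 digits
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1737385522.980460970") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	const tolerance = 1 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}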
	I0120 15:05:23.008230 2137369 start.go:83] releasing machines lock for "addons-823768", held for 25.310337319s
	I0120 15:05:23.008265 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:23.008613 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
	I0120 15:05:23.011490 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.011849 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:23.011878 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.012093 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:23.012681 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:23.012869 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:23.012984 2137369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 15:05:23.013034 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:23.013163 2137369 ssh_runner.go:195] Run: cat /version.json
	I0120 15:05:23.013186 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:23.015959 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.016170 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.016408 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:23.016434 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.016609 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:23.016700 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:23.016732 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:23.016845 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:23.016912 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:23.016984 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:23.017055 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:23.017119 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:23.017164 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:23.017332 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:23.091913 2137369 ssh_runner.go:195] Run: systemctl --version
	I0120 15:05:23.122269 2137369 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 15:05:23.875612 2137369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 15:05:23.882266 2137369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 15:05:23.882347 2137369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 15:05:23.900478 2137369 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
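The `find ... -exec mv` step above side-lines any bridge or podman CNI configuration by appending a `.mk_disabled` suffix so the runtime will not load it. A small Go sketch of the same rename pass (the /etc/cni/net.d path comes from the log; running this for real needs matching permissions):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			panic(err)
		}
		for _, path := range matches {
			// skip files that have already been disabled
			if strings.HasSuffix(path, ".mk_disabled") {
				continue
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, "rename failed:", err)
				continue
			}
			fmt.Println("disabled", path)
		}
	}
}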
	I0120 15:05:23.900506 2137369 start.go:495] detecting cgroup driver to use...
	I0120 15:05:23.900575 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 15:05:23.918752 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 15:05:23.934434 2137369 docker.go:217] disabling cri-docker service (if available) ...
	I0120 15:05:23.934503 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 15:05:23.948970 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 15:05:23.963860 2137369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 15:05:24.085254 2137369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 15:05:24.229859 2137369 docker.go:233] disabling docker service ...
	I0120 15:05:24.229956 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 15:05:24.245938 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 15:05:24.260809 2137369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 15:05:24.396969 2137369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 15:05:24.518925 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 15:05:24.534100 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 15:05:24.553792 2137369 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 15:05:24.553860 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.565579 2137369 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 15:05:24.565658 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.577482 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.589471 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.601410 2137369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 15:05:24.613467 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.624780 2137369 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 15:05:24.643556 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
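The sed invocations above rewrite the cri-o drop-in so it pins the pause image to registry.k8s.io/pause:3.10 and switches the cgroup manager to cgroupfs with conmon following the pod cgroup. A minimal Go sketch of the same rewrites done in memory with regular expressions; the sample config below is invented for illustration, not the real 02-crio.conf:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pin the pause image, mirroring: sed 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// switch the cgroup manager to cgroupfs and make conmon follow the pod cgroup
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}

Running it prints the rewritten drop-in, which is what the later `sudo systemctl restart crio` would pick up.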
	I0120 15:05:24.655973 2137369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 15:05:24.666889 2137369 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 15:05:24.666993 2137369 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 15:05:24.681872 2137369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 15:05:24.692833 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 15:05:24.816424 2137369 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 15:05:24.916890 2137369 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 15:05:24.917033 2137369 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 15:05:24.922124 2137369 start.go:563] Will wait 60s for crictl version
	I0120 15:05:24.922223 2137369 ssh_runner.go:195] Run: which crictl
	I0120 15:05:24.926492 2137369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 15:05:24.966056 2137369 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 15:05:24.966165 2137369 ssh_runner.go:195] Run: crio --version
	I0120 15:05:25.000470 2137369 ssh_runner.go:195] Run: crio --version
	I0120 15:05:25.032126 2137369 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 15:05:25.033657 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
	I0120 15:05:25.036578 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:25.037003 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:25.037039 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:25.037400 2137369 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 15:05:25.042011 2137369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 15:05:25.055574 2137369 kubeadm.go:883] updating cluster {Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 15:05:25.055706 2137369 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 15:05:25.055752 2137369 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 15:05:25.092416 2137369 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 15:05:25.092490 2137369 ssh_runner.go:195] Run: which lz4
	I0120 15:05:25.096985 2137369 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 15:05:25.101643 2137369 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 15:05:25.101687 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 15:05:26.559521 2137369 crio.go:462] duration metric: took 1.462632814s to copy over tarball
	I0120 15:05:26.559603 2137369 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 15:05:28.881265 2137369 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321627399s)
	I0120 15:05:28.881296 2137369 crio.go:469] duration metric: took 2.321738568s to extract the tarball
	I0120 15:05:28.881308 2137369 ssh_runner.go:146] rm: /preloaded.tar.lz4
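The preload step copies the lz4-compressed image tarball into the guest and unpacks it under /var, reporting a duration metric for each phase. A hedged Go sketch of running that extraction with os/exec and timing it, assuming tar and lz4 are available on the PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}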
	I0120 15:05:28.923957 2137369 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 15:05:28.966345 2137369 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 15:05:28.966375 2137369 cache_images.go:84] Images are preloaded, skipping loading
	I0120 15:05:28.966384 2137369 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.32.0 crio true true} ...
	I0120 15:05:28.966505 2137369 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-823768 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 15:05:28.966576 2137369 ssh_runner.go:195] Run: crio config
	I0120 15:05:29.027026 2137369 cni.go:84] Creating CNI manager for ""
	I0120 15:05:29.027056 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:05:29.027070 2137369 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 15:05:29.027106 2137369 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-823768 NodeName:addons-823768 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 15:05:29.027278 2137369 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-823768"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.158"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
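The kubeadm configuration above is rendered from the cluster's node IP, API server port, Kubernetes version, and networking settings. A simplified Go sketch that renders a cut-down version of the same document with text/template; only a handful of the fields shown in the log are included, and the template text is an illustration rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type clusterParams struct {
	NodeIP            string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	// values taken from the log above
	p := clusterParams{
		NodeIP:            "192.168.39.158",
		APIServerPort:     8443,
		KubernetesVersion: "v1.32.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}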
	
	I0120 15:05:29.027360 2137369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 15:05:29.038001 2137369 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 15:05:29.038070 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 15:05:29.048357 2137369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0120 15:05:29.066394 2137369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 15:05:29.083817 2137369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0120 15:05:29.101973 2137369 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0120 15:05:29.106193 2137369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 15:05:29.119610 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 15:05:29.229096 2137369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 15:05:29.247908 2137369 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768 for IP: 192.168.39.158
	I0120 15:05:29.247938 2137369 certs.go:194] generating shared ca certs ...
	I0120 15:05:29.247962 2137369 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.248133 2137369 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 15:05:29.375528 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt ...
	I0120 15:05:29.375570 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt: {Name:mk95237ca492d6a8873dc0ee527d241251260641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.375788 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key ...
	I0120 15:05:29.375806 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key: {Name:mk2a2005e42e379cc392095c3323349ceaba77a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.375924 2137369 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 15:05:29.506135 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt ...
	I0120 15:05:29.506170 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt: {Name:mkbf86178b27c05eca2541aa5684eb4efb701b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.506350 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key ...
	I0120 15:05:29.506366 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key: {Name:mk482675847c9e92b5693c4a036fdcbdd07762af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.506469 2137369 certs.go:256] generating profile certs ...
	I0120 15:05:29.506569 2137369 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key
	I0120 15:05:29.506591 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt with IP's: []
	I0120 15:05:29.632374 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt ...
	I0120 15:05:29.632424 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: {Name:mk3520768cf7dae31823de6f71890b04241d6376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.632615 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key ...
	I0120 15:05:29.632631 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key: {Name:mk76119af4a5a356e887e3134370f7dc46e58fde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.632737 2137369 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5
	I0120 15:05:29.632764 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I0120 15:05:29.770493 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 ...
	I0120 15:05:29.770531 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5: {Name:mk9454cdba7b3006624e137f0bfa7b68d0d57860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.770726 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5 ...
	I0120 15:05:29.770744 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5: {Name:mkb77c90195774352d1df405073394964b639a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.770848 2137369 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt
	I0120 15:05:29.770966 2137369 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key
	I0120 15:05:29.771058 2137369 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key
	I0120 15:05:29.771088 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt with IP's: []
	I0120 15:05:29.886204 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt ...
	I0120 15:05:29.886243 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt: {Name:mka25cfa7c2ede2de31741302e198a7540947810 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.886431 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key ...
	I0120 15:05:29.886449 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key: {Name:mk8d45c04d3d1bcd97c6423c1861ad369ae8c86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:29.886681 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 15:05:29.886732 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 15:05:29.886764 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 15:05:29.886800 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
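The certs.go steps above build a local CA plus profile-specific client, apiserver, and proxy-client certificates, with the apiserver cert carrying the service, loopback, and node IPs as SANs. A compact Go sketch of that general shape using crypto/x509: a self-signed CA and one server certificate signed by it. The ECDSA key type is an arbitrary choice here and error handling is elided; this is not minikube's crypto.go:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and a self-signed CA certificate (errors ignored for brevity).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and a certificate signed by the CA, carrying SANs like those in the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-823768"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-823768", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.158")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write both certificates as PEM to stdout.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}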
	I0120 15:05:29.887529 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 15:05:29.920958 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 15:05:29.961136 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 15:05:29.991787 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 15:05:30.017520 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 15:05:30.042540 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 15:05:30.067826 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 15:05:30.093111 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 15:05:30.120801 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 15:05:30.145867 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 15:05:30.163291 2137369 ssh_runner.go:195] Run: openssl version
	I0120 15:05:30.169332 2137369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 15:05:30.180684 2137369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 15:05:30.185690 2137369 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 15:05:30.185771 2137369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 15:05:30.192059 2137369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 15:05:30.203678 2137369 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 15:05:30.208221 2137369 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 15:05:30.208309 2137369 kubeadm.go:392] StartCluster: {Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:05:30.208405 2137369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 15:05:30.208469 2137369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 15:05:30.247022 2137369 cri.go:89] found id: ""
	I0120 15:05:30.247118 2137369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 15:05:30.257748 2137369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 15:05:30.268149 2137369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 15:05:30.279855 2137369 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 15:05:30.279881 2137369 kubeadm.go:157] found existing configuration files:
	
	I0120 15:05:30.279930 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 15:05:30.290146 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 15:05:30.290227 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 15:05:30.300670 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 15:05:30.310440 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 15:05:30.310509 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 15:05:30.320924 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 15:05:30.330490 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 15:05:30.330568 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 15:05:30.340525 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 15:05:30.350412 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 15:05:30.350475 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 15:05:30.360454 2137369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 15:05:30.416929 2137369 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 15:05:30.417044 2137369 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 15:05:30.518614 2137369 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 15:05:30.518741 2137369 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 15:05:30.518916 2137369 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 15:05:30.540333 2137369 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 15:05:30.586140 2137369 out.go:235]   - Generating certificates and keys ...
	I0120 15:05:30.586320 2137369 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 15:05:30.586423 2137369 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 15:05:30.724586 2137369 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 15:05:30.825694 2137369 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 15:05:30.938774 2137369 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 15:05:31.384157 2137369 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 15:05:31.450833 2137369 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 15:05:31.451192 2137369 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-823768 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0120 15:05:31.753678 2137369 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 15:05:31.753966 2137369 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-823768 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0120 15:05:31.832258 2137369 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 15:05:32.352824 2137369 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 15:05:32.512677 2137369 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 15:05:32.512862 2137369 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 15:05:32.737640 2137369 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 15:05:32.934895 2137369 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 15:05:33.168194 2137369 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 15:05:33.369097 2137369 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 15:05:33.571513 2137369 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 15:05:33.572224 2137369 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 15:05:33.577165 2137369 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 15:05:33.579006 2137369 out.go:235]   - Booting up control plane ...
	I0120 15:05:33.579145 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 15:05:33.579230 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 15:05:33.579530 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 15:05:33.595480 2137369 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 15:05:33.603182 2137369 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 15:05:33.603401 2137369 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 15:05:33.728727 2137369 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 15:05:33.728864 2137369 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 15:05:34.245972 2137369 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.580806ms
	I0120 15:05:34.246087 2137369 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 15:05:39.244154 2137369 kubeadm.go:310] [api-check] The API server is healthy after 5.00149055s
	I0120 15:05:39.266303 2137369 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 15:05:39.287362 2137369 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 15:05:39.321758 2137369 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 15:05:39.321956 2137369 kubeadm.go:310] [mark-control-plane] Marking the node addons-823768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 15:05:39.340511 2137369 kubeadm.go:310] [bootstrap-token] Using token: ctmxn9.z3jofwz9r9zooxkk
	I0120 15:05:39.342300 2137369 out.go:235]   - Configuring RBAC rules ...
	I0120 15:05:39.342426 2137369 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 15:05:39.356885 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 15:05:39.374011 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 15:05:39.378696 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 15:05:39.383011 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 15:05:39.388592 2137369 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 15:05:39.650709 2137369 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 15:05:40.082837 2137369 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 15:05:40.650481 2137369 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 15:05:40.651438 2137369 kubeadm.go:310] 
	I0120 15:05:40.651502 2137369 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 15:05:40.651508 2137369 kubeadm.go:310] 
	I0120 15:05:40.651580 2137369 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 15:05:40.651588 2137369 kubeadm.go:310] 
	I0120 15:05:40.651645 2137369 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 15:05:40.651750 2137369 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 15:05:40.651833 2137369 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 15:05:40.651857 2137369 kubeadm.go:310] 
	I0120 15:05:40.651920 2137369 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 15:05:40.651928 2137369 kubeadm.go:310] 
	I0120 15:05:40.651964 2137369 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 15:05:40.651970 2137369 kubeadm.go:310] 
	I0120 15:05:40.652010 2137369 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 15:05:40.652095 2137369 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 15:05:40.652198 2137369 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 15:05:40.652208 2137369 kubeadm.go:310] 
	I0120 15:05:40.652305 2137369 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 15:05:40.652415 2137369 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 15:05:40.652426 2137369 kubeadm.go:310] 
	I0120 15:05:40.652542 2137369 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ctmxn9.z3jofwz9r9zooxkk \
	I0120 15:05:40.652709 2137369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 15:05:40.652749 2137369 kubeadm.go:310] 	--control-plane 
	I0120 15:05:40.652769 2137369 kubeadm.go:310] 
	I0120 15:05:40.652869 2137369 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 15:05:40.652877 2137369 kubeadm.go:310] 
	I0120 15:05:40.652965 2137369 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ctmxn9.z3jofwz9r9zooxkk \
	I0120 15:05:40.653092 2137369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 15:05:40.653919 2137369 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 15:05:40.653957 2137369 cni.go:84] Creating CNI manager for ""
	I0120 15:05:40.653968 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:05:40.655707 2137369 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 15:05:40.657014 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 15:05:40.669371 2137369 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 15:05:40.690666 2137369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 15:05:40.690750 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:40.690763 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-823768 minikube.k8s.io/updated_at=2025_01_20T15_05_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=addons-823768 minikube.k8s.io/primary=true
	I0120 15:05:40.817437 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:40.860939 2137369 ops.go:34] apiserver oom_adj: -16
	I0120 15:05:41.317594 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:41.818178 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:42.318320 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:42.818281 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:43.318223 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:43.818194 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:44.317755 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:44.817685 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 15:05:44.939850 2137369 kubeadm.go:1113] duration metric: took 4.249182583s to wait for elevateKubeSystemPrivileges
	I0120 15:05:44.939901 2137369 kubeadm.go:394] duration metric: took 14.731620646s to StartCluster
	I0120 15:05:44.939931 2137369 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:44.940095 2137369 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:05:44.940664 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:05:44.940924 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 15:05:44.940960 2137369 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 15:05:44.941029 2137369 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0120 15:05:44.941156 2137369 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-823768"
	I0120 15:05:44.941168 2137369 addons.go:69] Setting default-storageclass=true in profile "addons-823768"
	I0120 15:05:44.941185 2137369 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-823768"
	I0120 15:05:44.941225 2137369 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-823768"
	I0120 15:05:44.941240 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:05:44.941236 2137369 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-823768"
	I0120 15:05:44.941261 2137369 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-823768"
	I0120 15:05:44.941263 2137369 addons.go:69] Setting ingress-dns=true in profile "addons-823768"
	I0120 15:05:44.941263 2137369 addons.go:69] Setting storage-provisioner=true in profile "addons-823768"
	I0120 15:05:44.941268 2137369 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-823768"
	I0120 15:05:44.941235 2137369 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-823768"
	I0120 15:05:44.941309 2137369 addons.go:69] Setting gcp-auth=true in profile "addons-823768"
	I0120 15:05:44.941312 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941245 2137369 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-823768"
	I0120 15:05:44.941326 2137369 addons.go:69] Setting volcano=true in profile "addons-823768"
	I0120 15:05:44.941335 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941341 2137369 addons.go:238] Setting addon volcano=true in "addons-823768"
	I0120 15:05:44.941340 2137369 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-823768"
	I0120 15:05:44.941351 2137369 mustload.go:65] Loading cluster: addons-823768
	I0120 15:05:44.941370 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941519 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:05:44.941724 2137369 addons.go:69] Setting volumesnapshots=true in profile "addons-823768"
	I0120 15:05:44.941738 2137369 addons.go:238] Setting addon volumesnapshots=true in "addons-823768"
	I0120 15:05:44.941738 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941760 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941762 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941764 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941775 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941254 2137369 addons.go:69] Setting registry=true in profile "addons-823768"
	I0120 15:05:44.941801 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941316 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.941808 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941818 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941845 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941894 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.941803 2137369 addons.go:238] Setting addon registry=true in "addons-823768"
	I0120 15:05:44.941238 2137369 addons.go:69] Setting cloud-spanner=true in profile "addons-823768"
	I0120 15:05:44.941921 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941927 2137369 addons.go:238] Setting addon cloud-spanner=true in "addons-823768"
	I0120 15:05:44.941803 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.941300 2137369 addons.go:238] Setting addon ingress-dns=true in "addons-823768"
	I0120 15:05:44.941312 2137369 addons.go:238] Setting addon storage-provisioner=true in "addons-823768"
	I0120 15:05:44.941973 2137369 addons.go:69] Setting ingress=true in profile "addons-823768"
	I0120 15:05:44.941987 2137369 addons.go:69] Setting metrics-server=true in profile "addons-823768"
	I0120 15:05:44.942006 2137369 addons.go:238] Setting addon ingress=true in "addons-823768"
	I0120 15:05:44.942008 2137369 addons.go:238] Setting addon metrics-server=true in "addons-823768"
	I0120 15:05:44.941157 2137369 addons.go:69] Setting yakd=true in profile "addons-823768"
	I0120 15:05:44.942019 2137369 addons.go:69] Setting inspektor-gadget=true in profile "addons-823768"
	I0120 15:05:44.942024 2137369 addons.go:238] Setting addon yakd=true in "addons-823768"
	I0120 15:05:44.942029 2137369 addons.go:238] Setting addon inspektor-gadget=true in "addons-823768"
	I0120 15:05:44.941768 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942160 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.942193 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942221 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942248 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942361 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942512 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942664 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.942678 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.942701 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942711 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942769 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.942790 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.942801 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.942867 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.943045 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943083 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.943132 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.943150 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943180 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.943246 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.943315 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943346 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.943438 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943467 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.943517 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.943543 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.944549 2137369 out.go:177] * Verifying Kubernetes components...
	I0120 15:05:44.946093 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 15:05:44.959677 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0120 15:05:44.960011 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42423
	I0120 15:05:44.961799 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I0120 15:05:44.962224 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
	I0120 15:05:44.975456 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.975521 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.977858 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:44.977901 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:44.977974 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:44.978023 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:44.979595 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:44.979622 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:44.979676 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:44.979696 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:44.979754 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:44.979765 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:44.979780 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:44.979793 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:44.980051 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:44.980728 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.980771 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.981025 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:44.981103 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:44.981151 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:44.981247 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:44.981674 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.981709 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.990376 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.990438 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:44.992035 2137369 addons.go:238] Setting addon default-storageclass=true in "addons-823768"
	I0120 15:05:44.992096 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:44.992479 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:44.992535 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.012533 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45025
	I0120 15:05:45.013196 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.013883 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.013910 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.014339 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.015001 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.015058 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.015378 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0120 15:05:45.015747 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0120 15:05:45.015858 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.016102 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.016315 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.016336 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.016397 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0120 15:05:45.016643 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.016800 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.016927 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.016938 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.017161 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.017666 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.017682 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.018062 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.018681 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.018724 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.018957 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I0120 15:05:45.018991 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0120 15:05:45.019076 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0120 15:05:45.019360 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.019429 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.020018 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.020059 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.020141 2137369 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-823768"
	I0120 15:05:45.020185 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:45.020270 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.020464 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.020478 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.020539 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.020546 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.020581 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.020619 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0120 15:05:45.021207 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.021225 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.021347 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I0120 15:05:45.021477 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.021943 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.021966 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.022029 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.022276 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.022435 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.022448 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.022804 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.022874 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.023429 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.023466 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.023660 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.023716 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.024288 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.024332 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.024435 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.024590 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.024603 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.025568 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.026786 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0120 15:05:45.028110 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0120 15:05:45.030717 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0120 15:05:45.031353 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.031932 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.031958 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.032368 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.032579 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.032756 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0120 15:05:45.034385 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:45.034810 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.034864 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.035693 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0120 15:05:45.036864 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0120 15:05:45.038073 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0120 15:05:45.039402 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0120 15:05:45.040775 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0120 15:05:45.041455 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0120 15:05:45.041873 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0120 15:05:45.041899 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0120 15:05:45.041928 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.043759 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41487
	I0120 15:05:45.044358 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.045115 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.045137 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.045935 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.045941 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.046009 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.046409 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.046468 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.046483 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.046577 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0120 15:05:45.046798 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.047024 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.047144 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.047300 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.047464 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.047824 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0120 15:05:45.047935 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.047955 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.048202 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.048664 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.048738 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.048759 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.049129 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.049227 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.049283 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.049394 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.049412 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.050189 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.050309 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.050835 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.050876 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.052764 2137369 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0120 15:05:45.054269 2137369 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 15:05:45.054297 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0120 15:05:45.054320 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.055060 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0120 15:05:45.058077 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I0120 15:05:45.058077 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.058568 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.058679 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.059020 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.059104 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42395
	I0120 15:05:45.059253 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.059422 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.059552 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.063381 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.063387 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.063439 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.063467 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.063536 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0120 15:05:45.063633 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
	I0120 15:05:45.063727 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064081 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064189 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.064230 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.064315 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064334 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064318 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.064393 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.064400 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0120 15:05:45.064875 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.064889 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.064909 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.064977 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.065050 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.065064 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.065201 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.065215 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.065388 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.065403 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.065458 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.065499 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.066244 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.066355 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.066407 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.066452 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.066496 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.067341 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.067385 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.067968 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:45.068002 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.068008 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:45.068018 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.068091 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.068266 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.068547 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.068806 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.068847 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.070172 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.070709 2137369 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0120 15:05:45.070818 2137369 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0120 15:05:45.072050 2137369 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0120 15:05:45.072074 2137369 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0120 15:05:45.072104 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.072104 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.072056 2137369 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0120 15:05:45.073050 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 15:05:45.073066 2137369 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 15:05:45.073096 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.073987 2137369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0120 15:05:45.074342 2137369 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 15:05:45.074359 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0120 15:05:45.074379 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.077368 2137369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 15:05:45.078567 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.079183 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.080151 2137369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 15:05:45.080251 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.080282 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.080851 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.081141 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.081161 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.081487 2137369 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 15:05:45.081508 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0120 15:05:45.081529 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.081534 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.081928 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.081993 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.082052 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.082065 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.082637 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.082689 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.082714 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.083344 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.083414 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.083588 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.084012 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.084190 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.084795 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.085120 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.085772 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.085809 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.085817 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.085975 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.086122 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.086288 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.086641 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0120 15:05:45.090566 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.091280 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.091307 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.091792 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.091991 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.093777 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.094693 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0120 15:05:45.095349 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.095731 2137369 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0120 15:05:45.095944 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.095969 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.096365 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.096584 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.096879 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0120 15:05:45.096896 2137369 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0120 15:05:45.096918 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.098679 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.098923 2137369 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 15:05:45.098946 2137369 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 15:05:45.098964 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.099552 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
	I0120 15:05:45.100021 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.100586 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.100612 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.100957 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.101159 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.101709 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0120 15:05:45.102248 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.102752 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.102794 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0120 15:05:45.102845 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.102863 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.102927 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.103224 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.103297 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.103391 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.103406 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.103603 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.103859 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.103886 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.103960 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.104015 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.104017 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.104033 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.104060 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.104106 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.104559 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.104625 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.104773 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.104831 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.104961 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.105289 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.105873 2137369 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0120 15:05:45.105892 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.106638 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.108265 2137369 out.go:177]   - Using image docker.io/busybox:stable
	I0120 15:05:45.108267 2137369 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0120 15:05:45.109934 2137369 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 15:05:45.109962 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0120 15:05:45.109986 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.109937 2137369 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 15:05:45.110047 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0120 15:05:45.110061 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.110071 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0120 15:05:45.110709 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.110716 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0120 15:05:45.111211 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.111237 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.111479 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.116371 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.116407 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.116419 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.116428 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.116378 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0120 15:05:45.116376 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.116503 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.116539 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.116542 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.116568 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.116736 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.116754 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.116795 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.117290 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.117300 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.117478 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.117505 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.117648 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.118043 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.118047 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.118553 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.118863 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.118882 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.119027 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.119310 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.119541 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.119616 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.120800 2137369 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0120 15:05:45.121412 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.122194 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0120 15:05:45.122217 2137369 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0120 15:05:45.122238 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.122255 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.122572 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:45.122640 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:45.122951 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:45.122971 2137369 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 15:05:45.123063 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:45.123075 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:45.123096 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:45.123119 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:45.123485 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:45.123498 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	W0120 15:05:45.123579 2137369 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0120 15:05:45.124438 2137369 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 15:05:45.124457 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 15:05:45.124477 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.126513 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.126830 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.126866 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.127075 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.127259 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.127582 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.127743 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.128516 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38469
	I0120 15:05:45.129042 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.129062 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.129786 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.129811 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.129822 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.129863 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.130021 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.130296 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.130295 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.130504 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.130640 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.130691 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.131987 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0120 15:05:45.132360 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.132609 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:45.133420 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:45.133445 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:45.133859 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:45.134118 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:45.134354 2137369 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0120 15:05:45.135691 2137369 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0120 15:05:45.135713 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0120 15:05:45.135737 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.135899 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:45.137430 2137369 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0120 15:05:45.138646 2137369 out.go:177]   - Using image docker.io/registry:2.8.3
	I0120 15:05:45.138920 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.139336 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.139350 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.139552 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.139760 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.139903 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0120 15:05:45.139927 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0120 15:05:45.139947 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:45.139913 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.140150 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	W0120 15:05:45.141472 2137369 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33340->192.168.39.158:22: read: connection reset by peer
	I0120 15:05:45.141659 2137369 retry.go:31] will retry after 248.832256ms: ssh: handshake failed: read tcp 192.168.39.1:33340->192.168.39.158:22: read: connection reset by peer
	I0120 15:05:45.143344 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.143825 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:45.143969 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:45.144003 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:45.144223 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:45.144424 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:45.144580 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:45.417776 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 15:05:45.471994 2137369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 15:05:45.472017 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 15:05:45.489674 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 15:05:45.526555 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 15:05:45.527835 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0120 15:05:45.527865 2137369 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0120 15:05:45.550223 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0120 15:05:45.550256 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0120 15:05:45.593768 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 15:05:45.603223 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 15:05:45.617896 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 15:05:45.640784 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0120 15:05:45.640819 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0120 15:05:45.663716 2137369 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0120 15:05:45.663743 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0120 15:05:45.677268 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0120 15:05:45.677311 2137369 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0120 15:05:45.703833 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 15:05:45.703861 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0120 15:05:45.714450 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 15:05:45.755610 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0120 15:05:45.755638 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0120 15:05:45.790857 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0120 15:05:45.790881 2137369 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0120 15:05:45.845887 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0120 15:05:45.887294 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0120 15:05:45.887977 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0120 15:05:45.888000 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0120 15:05:45.924864 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 15:05:45.924896 2137369 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 15:05:45.925761 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0120 15:05:45.925784 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0120 15:05:45.937497 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0120 15:05:45.937531 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0120 15:05:46.025842 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0120 15:05:46.025879 2137369 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0120 15:05:46.113217 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0120 15:05:46.142184 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 15:05:46.142236 2137369 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 15:05:46.196187 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0120 15:05:46.196215 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0120 15:05:46.211841 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0120 15:05:46.211883 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0120 15:05:46.260854 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0120 15:05:46.260889 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0120 15:05:46.349717 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 15:05:46.363946 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0120 15:05:46.363982 2137369 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0120 15:05:46.531940 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0120 15:05:46.531972 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0120 15:05:46.676731 2137369 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 15:05:46.676761 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0120 15:05:46.699767 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0120 15:05:46.911967 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0120 15:05:46.912002 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0120 15:05:47.094846 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 15:05:47.136150 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.718325287s)
	I0120 15:05:47.136232 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:47.136254 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:47.136602 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:47.136623 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:47.136638 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:47.136742 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:47.137159 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:47.137183 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:47.256792 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0120 15:05:47.256827 2137369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0120 15:05:47.600570 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0120 15:05:47.600599 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0120 15:05:48.025069 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0120 15:05:48.025100 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0120 15:05:48.180094 2137369 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.708049921s)
	I0120 15:05:48.180159 2137369 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.708105198s)
	I0120 15:05:48.180191 2137369 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0120 15:05:48.180283 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.690575165s)
	I0120 15:05:48.180339 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.180353 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.180355 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.653759394s)
	I0120 15:05:48.180401 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.180419 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.180669 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.180685 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.180696 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.180703 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.180826 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:48.180904 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.180930 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.181183 2137369 node_ready.go:35] waiting up to 6m0s for node "addons-823768" to be "Ready" ...
	I0120 15:05:48.181456 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.181473 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.181482 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.181494 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.182413 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.182432 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.182430 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:48.193850 2137369 node_ready.go:49] node "addons-823768" has status "Ready":"True"
	I0120 15:05:48.193881 2137369 node_ready.go:38] duration metric: took 12.636766ms for node "addons-823768" to be "Ready" ...
	I0120 15:05:48.193893 2137369 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 15:05:48.246992 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:48.247119 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:48.247468 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:48.247530 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:48.247542 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:48.259232 2137369 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace to be "Ready" ...
	I0120 15:05:48.283051 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 15:05:48.283087 2137369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0120 15:05:48.686812 2137369 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-823768" context rescaled to 1 replicas
	I0120 15:05:48.755354 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 15:05:50.506941 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:05:51.355199 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.761376681s)
	I0120 15:05:51.355294 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.355314 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.355667 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:51.355754 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.355778 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.355801 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.355813 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.356189 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.356205 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.457453 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.457488 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.457937 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:51.458005 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.458029 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.537693 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.934416648s)
	I0120 15:05:51.537784 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.537799 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.538261 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.538287 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.538298 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:51.538307 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:51.538535 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:51.538558 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:51.538576 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:51.939586 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0120 15:05:51.939639 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:51.943517 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:51.944138 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:51.944174 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:51.944392 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:51.944662 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:51.944863 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:51.945029 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:52.359222 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0120 15:05:52.485709 2137369 addons.go:238] Setting addon gcp-auth=true in "addons-823768"
	I0120 15:05:52.485795 2137369 host.go:66] Checking if "addons-823768" exists ...
	I0120 15:05:52.486338 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:52.486410 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:52.503565 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
	I0120 15:05:52.504038 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:52.504670 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:52.504702 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:52.505075 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:52.505679 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:05:52.505728 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:05:52.521951 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0120 15:05:52.522548 2137369 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:05:52.523148 2137369 main.go:141] libmachine: Using API Version  1
	I0120 15:05:52.523181 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:05:52.523646 2137369 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:05:52.523933 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
	I0120 15:05:52.526028 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
	I0120 15:05:52.526329 2137369 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0120 15:05:52.526368 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
	I0120 15:05:52.529896 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:52.530491 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
	I0120 15:05:52.530534 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
	I0120 15:05:52.530704 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
	I0120 15:05:52.530923 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
	I0120 15:05:52.531085 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
	I0120 15:05:52.531247 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
	I0120 15:05:52.803206 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:05:53.218889 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.600943277s)
	I0120 15:05:53.218965 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.218981 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.218975 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.504485364s)
	I0120 15:05:53.219043 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.373117907s)
	I0120 15:05:53.219086 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219102 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219059 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219150 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.331817088s)
	I0120 15:05:53.219162 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219185 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219205 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219246 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.105968154s)
	I0120 15:05:53.219283 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219298 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219406 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.869651037s)
	I0120 15:05:53.219441 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219452 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219549 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.519750539s)
	I0120 15:05:53.219567 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219576 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219695 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.124796876s)
	W0120 15:05:53.219739 2137369 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 15:05:53.219749 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219750 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219761 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219774 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219784 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219786 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219802 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219784 2137369 retry.go:31] will retry after 298.07171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 15:05:53.219827 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219835 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219830 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219847 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219851 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219856 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219857 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219861 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219868 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219885 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219885 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219896 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219905 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219868 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219912 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219916 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.219931 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.219940 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.219909 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.219947 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.219953 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.220005 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.220026 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.220032 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.220039 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.220045 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.220117 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.220129 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.220211 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:53.220226 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:53.221944 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.222004 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222013 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.222022 2137369 addons.go:479] Verifying addon ingress=true in "addons-823768"
	I0120 15:05:53.222034 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.222059 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222065 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.222071 2137369 addons.go:479] Verifying addon registry=true in "addons-823768"
	I0120 15:05:53.222245 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.222266 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222270 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.222283 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222298 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.222513 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.222673 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.222687 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.223993 2137369 out.go:177] * Verifying ingress addon...
	I0120 15:05:53.224095 2137369 out.go:177] * Verifying registry addon...
	I0120 15:05:53.224118 2137369 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-823768 service yakd-dashboard -n yakd-dashboard
	
	I0120 15:05:53.225572 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:53.225592 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:53.225602 2137369 addons.go:479] Verifying addon metrics-server=true in "addons-823768"
	I0120 15:05:53.225606 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:53.226182 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0120 15:05:53.226205 2137369 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0120 15:05:53.266543 2137369 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0120 15:05:53.266570 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:53.270246 2137369 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0120 15:05:53.270273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:53.518952 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 15:05:53.732321 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:53.733564 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:54.236310 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:54.238382 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:54.734271 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:54.734269 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:55.254274 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:55.254786 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:55.344689 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:05:55.500757 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.745335252s)
	I0120 15:05:55.500816 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:55.500835 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:55.500856 2137369 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.974495908s)
	I0120 15:05:55.501209 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:55.501237 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:55.501248 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:55.501260 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:55.501495 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:55.501519 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:55.501544 2137369 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-823768"
	I0120 15:05:55.502730 2137369 out.go:177] * Verifying csi-hostpath-driver addon...
	I0120 15:05:55.502734 2137369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 15:05:55.504940 2137369 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0120 15:05:55.505554 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0120 15:05:55.506259 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0120 15:05:55.506279 2137369 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0120 15:05:55.575714 2137369 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0120 15:05:55.575753 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:55.671030 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0120 15:05:55.671060 2137369 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0120 15:05:55.724863 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 15:05:55.724895 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0120 15:05:55.730623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:55.733038 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:55.782983 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 15:05:56.011925 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:56.234678 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:56.234948 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:56.512779 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:56.731710 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:56.731852 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:56.797148 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.278133853s)
	I0120 15:05:56.797216 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:56.797235 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:56.797528 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:56.797547 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:56.797556 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:56.797563 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:56.797791 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:56.797809 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:56.797822 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:57.010804 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:57.233033 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:57.233289 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:57.542025 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.758981808s)
	I0120 15:05:57.542087 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:57.542105 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:57.542525 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:57.542544 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:57.542553 2137369 main.go:141] libmachine: Making call to close driver server
	I0120 15:05:57.542551 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:57.542560 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
	I0120 15:05:57.542800 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
	I0120 15:05:57.542813 2137369 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:05:57.542824 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:05:57.543638 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:57.544206 2137369 addons.go:479] Verifying addon gcp-auth=true in "addons-823768"
	I0120 15:05:57.546233 2137369 out.go:177] * Verifying gcp-auth addon...
	I0120 15:05:57.548167 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0120 15:05:57.608035 2137369 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0120 15:05:57.608063 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:57.757556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:57.758358 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:57.801308 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:05:58.017964 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:58.054047 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:58.232134 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:58.232349 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:58.511487 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:58.552181 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:58.732397 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:58.732613 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:59.009728 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:59.052751 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:59.232207 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:59.232938 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:05:59.511390 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:05:59.552558 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:05:59.732339 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:05:59.733137 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:00.011192 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:00.052027 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:00.230588 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:00.230983 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:00.265616 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:00.512889 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:00.553312 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:00.731731 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:00.732346 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:01.010060 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:01.052209 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:01.230828 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:01.231470 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:01.535707 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:01.552390 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:01.731636 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:01.732100 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:02.011594 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:02.052516 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:02.231481 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:02.231556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:02.512089 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:02.552231 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:02.732082 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:02.733140 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:02.765831 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:03.010323 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:03.051618 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:03.232259 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:03.232419 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:03.511102 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:03.552477 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:03.731997 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:03.732052 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:04.012261 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:04.052901 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:04.231169 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:04.231374 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:04.659683 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:04.660505 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:04.731959 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:04.732234 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:04.767143 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:05.010349 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:05.051978 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:05.231135 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:05.231273 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:05.512111 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:05.552863 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:05.732208 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:05.732591 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:06.011307 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:06.052476 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:06.232498 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:06.233241 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:06.510827 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:06.552061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:06.981709 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:06.986582 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:06.990553 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:07.011207 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:07.052592 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:07.231024 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:07.231680 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:07.511630 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:07.551889 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:07.731928 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:07.732524 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:08.011481 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:08.051943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:08.232305 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:08.232698 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:08.510640 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:08.552388 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:08.730939 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:08.733311 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:09.011242 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:09.052916 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:09.231309 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:09.232010 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:09.266856 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:09.513742 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:09.551779 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:09.730962 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:09.731230 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:10.010833 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:10.051663 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:10.231411 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:10.232988 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:10.511809 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:10.552186 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:10.732270 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:10.733127 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:11.347386 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:11.359078 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
	I0120 15:06:11.446014 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:11.446575 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:11.446659 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:11.547362 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:11.554123 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:11.732018 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:11.732027 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:12.011787 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:12.051888 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:12.233138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:12.233471 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:12.511105 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:12.552912 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:12.731592 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:12.733254 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:12.765446 2137369 pod_ready.go:93] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.765473 2137369 pod_ready.go:82] duration metric: took 24.50620135s for pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.765484 2137369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.772118 2137369 pod_ready.go:93] pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.772143 2137369 pod_ready.go:82] duration metric: took 6.652598ms for pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.772152 2137369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.774084 2137369 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p59mv" not found
	I0120 15:06:12.774108 2137369 pod_ready.go:82] duration metric: took 1.950369ms for pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace to be "Ready" ...
	E0120 15:06:12.774119 2137369 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p59mv" not found
	I0120 15:06:12.774125 2137369 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.779574 2137369 pod_ready.go:93] pod "etcd-addons-823768" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.779594 2137369 pod_ready.go:82] duration metric: took 5.463343ms for pod "etcd-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.779604 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.786673 2137369 pod_ready.go:93] pod "kube-apiserver-addons-823768" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.786695 2137369 pod_ready.go:82] duration metric: took 7.084094ms for pod "kube-apiserver-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.786705 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.964107 2137369 pod_ready.go:93] pod "kube-controller-manager-addons-823768" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:12.964143 2137369 pod_ready.go:82] duration metric: took 177.429563ms for pod "kube-controller-manager-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:12.964159 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7rvmm" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:13.010809 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:13.052318 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:13.231805 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:13.232197 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:13.364952 2137369 pod_ready.go:93] pod "kube-proxy-7rvmm" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:13.364991 2137369 pod_ready.go:82] duration metric: took 400.822729ms for pod "kube-proxy-7rvmm" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:13.365008 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:13.510667 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:13.551664 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:13.732398 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:13.733063 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:13.763469 2137369 pod_ready.go:93] pod "kube-scheduler-addons-823768" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:13.763497 2137369 pod_ready.go:82] duration metric: took 398.480559ms for pod "kube-scheduler-addons-823768" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:13.763510 2137369 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:14.011840 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:14.052929 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:14.164972 2137369 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace has status "Ready":"True"
	I0120 15:06:14.165004 2137369 pod_ready.go:82] duration metric: took 401.486108ms for pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace to be "Ready" ...
	I0120 15:06:14.165013 2137369 pod_ready.go:39] duration metric: took 25.971110211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 15:06:14.165032 2137369 api_server.go:52] waiting for apiserver process to appear ...
	I0120 15:06:14.165104 2137369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 15:06:14.200902 2137369 api_server.go:72] duration metric: took 29.259888219s to wait for apiserver process to appear ...
	I0120 15:06:14.200940 2137369 api_server.go:88] waiting for apiserver healthz status ...
	I0120 15:06:14.200966 2137369 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0120 15:06:14.206516 2137369 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0120 15:06:14.207753 2137369 api_server.go:141] control plane version: v1.32.0
	I0120 15:06:14.207791 2137369 api_server.go:131] duration metric: took 6.841209ms to wait for apiserver health ...
	I0120 15:06:14.207804 2137369 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 15:06:14.233265 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:14.234965 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:14.370097 2137369 system_pods.go:59] 18 kube-system pods found
	I0120 15:06:14.370150 2137369 system_pods.go:61] "amd-gpu-device-plugin-hd9wh" [74d848dc-f26d-43fe-8a5a-a0df1659422e] Running
	I0120 15:06:14.370159 2137369 system_pods.go:61] "coredns-668d6bf9bc-5vcsv" [07cf3526-d1a7-45e9-a4b0-843c4c5d8087] Running
	I0120 15:06:14.370170 2137369 system_pods.go:61] "csi-hostpath-attacher-0" [116b9f15-1304-49fb-9076-931a2afbb254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0120 15:06:14.370182 2137369 system_pods.go:61] "csi-hostpath-resizer-0" [ff9ae680-66e0-4d97-a31f-401bc2303326] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0120 15:06:14.370193 2137369 system_pods.go:61] "csi-hostpathplugin-gnx78" [c749cfac-9a22-4577-9180-7c6720645ff1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0120 15:06:14.370201 2137369 system_pods.go:61] "etcd-addons-823768" [08fad36c-a2d6-4155-b601-6f4e7384579b] Running
	I0120 15:06:14.370206 2137369 system_pods.go:61] "kube-apiserver-addons-823768" [59da341e-91d6-4346-9d34-8ef1d3cc6f8f] Running
	I0120 15:06:14.370212 2137369 system_pods.go:61] "kube-controller-manager-addons-823768" [d40a64ff-5eba-4184-ad41-8134c3107af4] Running
	I0120 15:06:14.370220 2137369 system_pods.go:61] "kube-ingress-dns-minikube" [c004e6ed-e3c7-41fb-81db-143b10c8e7be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0120 15:06:14.370228 2137369 system_pods.go:61] "kube-proxy-7rvmm" [ad2f5c6d-b93f-4390-876b-33132993d790] Running
	I0120 15:06:14.370235 2137369 system_pods.go:61] "kube-scheduler-addons-823768" [2baca71e-3466-46ff-88cc-4c21ff431e5e] Running
	I0120 15:06:14.370244 2137369 system_pods.go:61] "metrics-server-7fbb699795-9st7r" [6298e5c1-be6a-46ae-ab5f-36c0273b0dfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 15:06:14.370253 2137369 system_pods.go:61] "nvidia-device-plugin-daemonset-nbm5g" [cef6725a-67fd-465e-abee-d71f4159ef92] Running
	I0120 15:06:14.370263 2137369 system_pods.go:61] "registry-6c86875c6f-zjrvn" [0eff11df-e7ff-4331-8d40-9b86a497286d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0120 15:06:14.370271 2137369 system_pods.go:61] "registry-proxy-s6v6f" [fd22a4f5-094c-4b62-a18c-cb9b1478e55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0120 15:06:14.370303 2137369 system_pods.go:61] "snapshot-controller-68b874b76f-v9qfd" [9f5c996f-6eab-461e-ab1b-cd3349dd28b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 15:06:14.370312 2137369 system_pods.go:61] "snapshot-controller-68b874b76f-wz6d5" [cacd7ffe-a681-4acf-96f8-18ef261221a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 15:06:14.370317 2137369 system_pods.go:61] "storage-provisioner" [0e778f21-8d84-4dd3-a4d5-1d838a0c732a] Running
	I0120 15:06:14.370328 2137369 system_pods.go:74] duration metric: took 162.516641ms to wait for pod list to return data ...
	I0120 15:06:14.370343 2137369 default_sa.go:34] waiting for default service account to be created ...
	I0120 15:06:14.509778 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:14.552297 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:14.563348 2137369 default_sa.go:45] found service account: "default"
	I0120 15:06:14.563381 2137369 default_sa.go:55] duration metric: took 193.030729ms for default service account to be created ...
	I0120 15:06:14.563393 2137369 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 15:06:14.730162 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:14.730276 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:14.769176 2137369 system_pods.go:87] 18 kube-system pods found
	I0120 15:06:14.964028 2137369 system_pods.go:105] "amd-gpu-device-plugin-hd9wh" [74d848dc-f26d-43fe-8a5a-a0df1659422e] Running
	I0120 15:06:14.964091 2137369 system_pods.go:105] "coredns-668d6bf9bc-5vcsv" [07cf3526-d1a7-45e9-a4b0-843c4c5d8087] Running
	I0120 15:06:14.964101 2137369 system_pods.go:105] "csi-hostpath-attacher-0" [116b9f15-1304-49fb-9076-931a2afbb254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0120 15:06:14.964108 2137369 system_pods.go:105] "csi-hostpath-resizer-0" [ff9ae680-66e0-4d97-a31f-401bc2303326] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0120 15:06:14.964121 2137369 system_pods.go:105] "csi-hostpathplugin-gnx78" [c749cfac-9a22-4577-9180-7c6720645ff1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0120 15:06:14.964126 2137369 system_pods.go:105] "etcd-addons-823768" [08fad36c-a2d6-4155-b601-6f4e7384579b] Running
	I0120 15:06:14.964133 2137369 system_pods.go:105] "kube-apiserver-addons-823768" [59da341e-91d6-4346-9d34-8ef1d3cc6f8f] Running
	I0120 15:06:14.964141 2137369 system_pods.go:105] "kube-controller-manager-addons-823768" [d40a64ff-5eba-4184-ad41-8134c3107af4] Running
	I0120 15:06:14.964148 2137369 system_pods.go:105] "kube-ingress-dns-minikube" [c004e6ed-e3c7-41fb-81db-143b10c8e7be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0120 15:06:14.964153 2137369 system_pods.go:105] "kube-proxy-7rvmm" [ad2f5c6d-b93f-4390-876b-33132993d790] Running
	I0120 15:06:14.964160 2137369 system_pods.go:105] "kube-scheduler-addons-823768" [2baca71e-3466-46ff-88cc-4c21ff431e5e] Running
	I0120 15:06:14.964166 2137369 system_pods.go:105] "metrics-server-7fbb699795-9st7r" [6298e5c1-be6a-46ae-ab5f-36c0273b0dfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 15:06:14.964173 2137369 system_pods.go:105] "nvidia-device-plugin-daemonset-nbm5g" [cef6725a-67fd-465e-abee-d71f4159ef92] Running
	I0120 15:06:14.964180 2137369 system_pods.go:105] "registry-6c86875c6f-zjrvn" [0eff11df-e7ff-4331-8d40-9b86a497286d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0120 15:06:14.964186 2137369 system_pods.go:105] "registry-proxy-s6v6f" [fd22a4f5-094c-4b62-a18c-cb9b1478e55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0120 15:06:14.964197 2137369 system_pods.go:105] "snapshot-controller-68b874b76f-v9qfd" [9f5c996f-6eab-461e-ab1b-cd3349dd28b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 15:06:14.964205 2137369 system_pods.go:105] "snapshot-controller-68b874b76f-wz6d5" [cacd7ffe-a681-4acf-96f8-18ef261221a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 15:06:14.964210 2137369 system_pods.go:105] "storage-provisioner" [0e778f21-8d84-4dd3-a4d5-1d838a0c732a] Running
	I0120 15:06:14.964220 2137369 system_pods.go:147] duration metric: took 400.820113ms to wait for k8s-apps to be running ...
	I0120 15:06:14.964230 2137369 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 15:06:14.964284 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 15:06:15.004824 2137369 system_svc.go:56] duration metric: took 40.572241ms WaitForService to wait for kubelet
	I0120 15:06:15.004866 2137369 kubeadm.go:582] duration metric: took 30.063861442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 15:06:15.004901 2137369 node_conditions.go:102] verifying NodePressure condition ...
	I0120 15:06:15.009936 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:15.052242 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:15.164145 2137369 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 15:06:15.164177 2137369 node_conditions.go:123] node cpu capacity is 2
	I0120 15:06:15.164191 2137369 node_conditions.go:105] duration metric: took 159.284808ms to run NodePressure ...
	I0120 15:06:15.164204 2137369 start.go:241] waiting for startup goroutines ...
	I0120 15:06:15.230651 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:15.230956 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:15.510392 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:15.552212 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:15.732107 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:15.732654 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:16.010121 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:16.053180 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:16.232364 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:16.232798 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:16.511718 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:16.552275 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:16.731858 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:16.732386 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:17.010874 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:17.051623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:17.231000 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:17.232412 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:17.510781 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:17.552061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:17.733072 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:17.733322 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:18.010148 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:18.051291 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:18.232422 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:18.232743 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:18.512092 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:18.552325 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:18.731432 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:18.731830 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:19.279723 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:19.279804 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:19.280003 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:19.280489 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:19.510898 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:19.552506 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:19.730594 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:19.731187 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:20.010579 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:20.052401 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:20.230980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:20.231222 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:20.510335 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:20.551579 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:20.731061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:20.731252 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:21.010169 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:21.052654 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:21.229930 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:21.230449 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:21.510623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:21.552046 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:21.731181 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:21.731380 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:22.011222 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:22.052269 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:22.231033 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:22.232123 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:22.510785 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:22.610273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:22.731847 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:22.732017 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:23.010161 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:23.051461 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:23.232240 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:23.232266 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:23.511226 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:23.552179 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:23.732405 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:23.732643 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:24.010952 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:24.052795 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:24.231556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:24.231982 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:24.509972 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:24.551620 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:24.730311 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:24.730951 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:25.011086 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:25.051840 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:25.236485 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:25.237121 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:25.513580 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:25.551665 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:25.744969 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:25.745049 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:26.014786 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:26.054803 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:26.240066 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:26.240329 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:26.510110 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:26.552530 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:26.737921 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:26.743487 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:27.013212 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:27.055873 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:27.231178 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:27.233505 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:27.512769 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:27.551845 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:27.731474 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:27.731923 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:28.010313 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:28.052515 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:28.231843 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:28.232624 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:28.511885 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:28.552260 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:28.732295 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:28.732302 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:29.012191 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:29.052216 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:29.232422 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:29.232716 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:29.511140 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:29.662141 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:29.737287 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:29.737522 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:30.011355 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:30.051923 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:30.231542 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:30.232918 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:30.511333 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:30.552399 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:30.731397 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:30.731994 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:31.010820 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:31.052300 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:31.232512 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:31.232915 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:31.511124 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:31.552129 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:31.731943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:31.732929 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:32.010958 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:32.052413 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:32.232661 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:32.232713 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:32.512609 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:32.551853 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:32.731496 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:32.731943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 15:06:33.012493 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:33.051613 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:33.230151 2137369 kapi.go:107] duration metric: took 40.003969564s to wait for kubernetes.io/minikube-addons=registry ...
	I0120 15:06:33.231162 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:33.511111 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:33.552356 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:33.731499 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:34.013686 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:34.052825 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:34.231068 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:34.511033 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:34.552588 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:34.730166 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:35.031945 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:35.061605 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:35.234449 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:35.510502 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:35.559244 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:35.731057 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:36.010493 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:36.051808 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:36.232380 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:36.509698 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:36.552621 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:36.745681 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:37.010808 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:37.052525 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:37.230332 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:37.520327 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:37.551800 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:37.881777 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:38.009937 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:38.052520 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:38.230366 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:38.511231 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:38.551738 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:38.731132 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:39.010276 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:39.109985 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:39.231136 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:39.509972 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:39.552736 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:39.730944 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:40.011296 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:40.052529 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:40.231123 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:40.511777 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:40.551973 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:40.731032 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:41.010973 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:41.052526 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:41.231947 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:41.512073 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:41.552896 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:41.731178 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:42.010888 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:42.052437 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:42.231031 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:42.511849 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:42.552177 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:42.731349 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:43.010828 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:43.052516 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:43.230567 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:43.512691 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:43.552341 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:43.731593 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:44.249538 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:44.250135 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:44.250242 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:44.511995 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:44.553372 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:44.730853 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:45.011136 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:45.051417 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:45.230955 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:45.510980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:45.553271 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:45.730966 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:46.011246 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:46.051775 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:46.230803 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:46.510401 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:46.552603 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:46.731699 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:47.011501 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:47.052513 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:47.232159 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:47.511022 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:47.553343 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:47.732640 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:48.013715 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:48.053087 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:48.232696 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:48.511319 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:48.555075 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:48.732744 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:49.023367 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:49.057856 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:49.230358 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:49.512138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:49.552102 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:49.732022 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:50.011032 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:50.052346 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:50.232198 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:50.511402 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:50.551759 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:50.730597 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:51.010485 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:51.052574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:51.231216 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:51.510010 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:51.552151 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:51.731248 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:52.009643 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:52.057120 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:52.231103 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:52.514636 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:52.553051 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:52.732413 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:53.010980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:53.052832 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:53.642329 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:53.643036 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:53.650490 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:53.731286 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:54.013633 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:54.113275 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:54.231589 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:54.511224 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:54.552851 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:54.730371 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:55.010217 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:55.051884 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:55.231352 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:55.517291 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:55.616101 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:55.731154 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:56.010679 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:56.051666 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:56.231039 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:56.512038 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:56.554197 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:56.734633 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:57.011271 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:57.052479 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:57.229871 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:57.511654 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:57.551574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:57.730415 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:58.010561 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:58.052189 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:58.231332 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:58.511608 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:58.554449 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:58.738948 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:59.011428 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:59.051900 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:59.240098 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:06:59.530091 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:06:59.559468 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:06:59.734879 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:00.010375 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:00.051846 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:00.231614 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:00.511807 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:00.552559 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:00.731087 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:01.010312 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:01.052009 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:01.230769 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:01.510328 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:01.552144 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:01.732106 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:02.010884 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:02.052084 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:02.230927 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:02.512458 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:02.552729 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:02.731487 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:03.010739 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:03.052270 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:03.230875 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:03.511574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:03.553107 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:03.731603 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:04.193942 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:04.194389 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:04.231564 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:04.510775 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:04.551910 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:04.731010 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:05.010565 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:05.051766 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:05.231878 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:05.511402 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:05.552012 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:05.731266 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:06.010819 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:06.051758 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:06.230573 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:06.656273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:06.657626 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:06.833411 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:07.010555 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:07.054659 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:07.239812 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:07.510886 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:07.551625 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:07.730512 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 15:07:08.010801 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:08.052573 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:08.230496 2137369 kapi.go:107] duration metric: took 1m15.004285338s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0120 15:07:08.512826 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:08.552138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:09.011987 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:09.051989 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:09.510790 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:09.552296 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:10.011148 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:10.052537 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:10.511355 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:10.551839 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:11.011519 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:11.110503 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 15:07:11.511730 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:11.611811 2137369 kapi.go:107] duration metric: took 1m14.063637565s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0120 15:07:11.613849 2137369 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-823768 cluster.
	I0120 15:07:11.615491 2137369 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0120 15:07:11.616833 2137369 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0120 15:07:12.010475 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:12.511601 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:13.010504 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 15:07:13.512426 2137369 kapi.go:107] duration metric: took 1m18.006867517s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0120 15:07:13.514225 2137369 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, ingress-dns, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0120 15:07:13.515460 2137369 addons.go:514] duration metric: took 1m28.574436568s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner-rancher storage-provisioner inspektor-gadget ingress-dns cloud-spanner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0120 15:07:13.515500 2137369 start.go:246] waiting for cluster config update ...
	I0120 15:07:13.515518 2137369 start.go:255] writing updated cluster config ...
	I0120 15:07:13.515785 2137369 ssh_runner.go:195] Run: rm -f paused
	I0120 15:07:13.569861 2137369 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 15:07:13.571716 2137369 out.go:177] * Done! kubectl is now configured to use "addons-823768" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.411031112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386045411003969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6e19d96-956b-470e-9394-c2452313d74c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.412004041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85c3d2b9-778e-4145-bfd5-4e4a86726618 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.412060403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85c3d2b9-778e-4145-bfd5-4e4a86726618 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.412589594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-918
0-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592
,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d87
19a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9edda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Imag
e:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisi
oner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&
ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1667918
8eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43
373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd1997
43d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7
876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec3143
8bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85c3d2b9-778e-4145-bfd5-4e4a86726618 name=/runtime.v1.RuntimeService/ListCont
ainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.455191722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98c3087d-1ab3-4b80-bc07-4fd99dffffaa name=/runtime.v1.RuntimeService/Version
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.455348787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98c3087d-1ab3-4b80-bc07-4fd99dffffaa name=/runtime.v1.RuntimeService/Version
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.457446268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37be4b97-8c96-4774-bef3-322472221abc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.458589913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386045458562487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37be4b97-8c96-4774-bef3-322472221abc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.459175615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2da5feaf-f6e1-495a-b837-6330c2282407 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.459386921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2da5feaf-f6e1-495a-b837-6330c2282407 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.460043660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-918
0-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592
,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d87
19a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9edda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Imag
e:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisi
oner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&
ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1667918
8eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43
373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd1997
43d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7
876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec3143
8bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2da5feaf-f6e1-495a-b837-6330c2282407 name=/runtime.v1.RuntimeService/ListCont
ainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.503128206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4bb77c3-f107-40df-b083-cddfa1e6c54b name=/runtime.v1.RuntimeService/Version
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.503211039Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4bb77c3-f107-40df-b083-cddfa1e6c54b name=/runtime.v1.RuntimeService/Version
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.504641346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0c2d80f-fe30-41bf-801f-2d4cf1ad6bc2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.506787883Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386045506700773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0c2d80f-fe30-41bf-801f-2d4cf1ad6bc2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.507866548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=030199a6-3d82-4308-9cde-531fb7e2411d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.507926590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=030199a6-3d82-4308-9cde-531fb7e2411d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.508407167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-918
0-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592
,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d87
19a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9edda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Imag
e:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisi
oner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&
ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1667918
8eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43
373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd1997
43d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7
876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec3143
8bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=030199a6-3d82-4308-9cde-531fb7e2411d name=/runtime.v1.RuntimeService/ListCont
ainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.551789976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92d3c651-6e13-4bce-8808-a26b2df3373a name=/runtime.v1.RuntimeService/Version
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.551865771Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92d3c651-6e13-4bce-8808-a26b2df3373a name=/runtime.v1.RuntimeService/Version
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.553394013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfc6ab8c-6d6f-4638-8fc6-4e31c3d4366a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.555056517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386045554994847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfc6ab8c-6d6f-4638-8fc6-4e31c3d4366a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.555751424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a300f94a-05d0-41c8-bcd1-a97a80d209f9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.555804446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a300f94a-05d0-41c8-bcd1-a97a80d209f9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:14:05 addons-823768 crio[664]: time="2025-01-20 15:14:05.556511137Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-918
0-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592
,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d87
19a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9edda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Imag
e:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisi
oner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&
ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1667918
8eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43
373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd1997
43d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7
876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec3143
8bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a300f94a-05d0-41c8-bcd1-a97a80d209f9 name=/runtime.v1.RuntimeService/ListCont
ainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0d939b4caf08e       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                                              5 minutes ago       Running             nginx                                    0                   3a519efe9b038       nginx
	56341dccb27e2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   0cb85977d13d8       busybox
	739724295d0f2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   178d147355f56       csi-hostpathplugin-gnx78
	b4f42aa541558       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   178d147355f56       csi-hostpathplugin-gnx78
	a2e97ce48722b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   178d147355f56       csi-hostpathplugin-gnx78
	559997a706e3d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   178d147355f56       csi-hostpathplugin-gnx78
	4c46266f6f3f8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   178d147355f56       csi-hostpathplugin-gnx78
	ebd723b834f32       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   36fc01dd96c91       csi-hostpath-resizer-0
	407fa55d66c41       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   178d147355f56       csi-hostpathplugin-gnx78
	4ac2dabca6c91       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   5d17831bf379a       csi-hostpath-attacher-0
	c5d3228e30e2f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   f8f344c34f1e0       snapshot-controller-68b874b76f-wz6d5
	36030becf6c98       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   aef77a63ec4ad       snapshot-controller-68b874b76f-v9qfd
	97181205fe8bf       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     7 minutes ago       Running             amd-gpu-device-plugin                    0                   ee23f086b04ce       amd-gpu-device-plugin-hd9wh
	6eeabacb6e6ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   fca109f861e41       storage-provisioner
	3ad760d35635f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             8 minutes ago       Running             coredns                                  0                   136274a1e784b       coredns-668d6bf9bc-5vcsv
	a16679188eadc       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                                             8 minutes ago       Running             kube-proxy                               0                   3eb31a3186fcb       kube-proxy-7rvmm
	3e011bb870926       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                                             8 minutes ago       Running             etcd                                     0                   11457cc606696       etcd-addons-823768
	2e3f3a7d8000f       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                                                             8 minutes ago       Running             kube-apiserver                           0                   04698fdd92bec       kube-apiserver-addons-823768
	2e3453aa93d27       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                                             8 minutes ago       Running             kube-scheduler                           0                   e3f2609c351e3       kube-scheduler-addons-823768
	910f65c08fb23       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                                             8 minutes ago       Running             kube-controller-manager                  0                   717d4b555e17c       kube-controller-manager-addons-823768
	
	
	==> coredns [3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71] <==
	[INFO] 10.244.0.22:47799 - 33541 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000149407s
	[INFO] 10.244.0.22:47799 - 60484 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098736s
	[INFO] 10.244.0.22:39240 - 35235 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091271s
	[INFO] 10.244.0.22:47799 - 18540 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000623s
	[INFO] 10.244.0.22:39240 - 29589 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000063168s
	[INFO] 10.244.0.22:47799 - 17110 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00009037s
	[INFO] 10.244.0.22:39240 - 45053 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000121832s
	[INFO] 10.244.0.22:39240 - 17367 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064111s
	[INFO] 10.244.0.22:39240 - 23962 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057851s
	[INFO] 10.244.0.22:39240 - 23130 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066432s
	[INFO] 10.244.0.22:39240 - 55567 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072352s
	[INFO] 10.244.0.22:60429 - 52347 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000152457s
	[INFO] 10.244.0.22:40142 - 20306 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090399s
	[INFO] 10.244.0.22:60429 - 18033 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000788128s
	[INFO] 10.244.0.22:40142 - 41073 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011065s
	[INFO] 10.244.0.22:40142 - 25082 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000277675s
	[INFO] 10.244.0.22:60429 - 25266 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066673s
	[INFO] 10.244.0.22:40142 - 38527 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071234s
	[INFO] 10.244.0.22:40142 - 27152 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000116722s
	[INFO] 10.244.0.22:60429 - 47417 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094372s
	[INFO] 10.244.0.22:40142 - 39425 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064307s
	[INFO] 10.244.0.22:60429 - 32253 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064252s
	[INFO] 10.244.0.22:40142 - 7167 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090261s
	[INFO] 10.244.0.22:60429 - 11566 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000095007s
	[INFO] 10.244.0.22:60429 - 40 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000076238s
	
	
	==> describe nodes <==
	Name:               addons-823768
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-823768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
	                    minikube.k8s.io/name=addons-823768
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T15_05_40_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-823768
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-823768"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 15:05:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-823768
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 15:14:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 15:11:16 +0000   Mon, 20 Jan 2025 15:05:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 15:11:16 +0000   Mon, 20 Jan 2025 15:05:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 15:11:16 +0000   Mon, 20 Jan 2025 15:05:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 15:11:16 +0000   Mon, 20 Jan 2025 15:05:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-823768
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ed69cfbae1c49d5a2adeea9f9d7ada9
	  System UUID:                2ed69cfb-ae1c-49d5-a2ad-eea9f9d7ada9
	  Boot ID:                    5745ae5a-4581-4558-8316-987961d0b42c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  default                     hello-world-app-7d9564db4-njdj6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  default                     task-pv-pod                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 amd-gpu-device-plugin-hd9wh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 coredns-668d6bf9bc-5vcsv                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m21s
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 csi-hostpathplugin-gnx78                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 etcd-addons-823768                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m25s
	  kube-system                 kube-apiserver-addons-823768             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-addons-823768    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-7rvmm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-addons-823768             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 snapshot-controller-68b874b76f-v9qfd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 snapshot-controller-68b874b76f-wz6d5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m17s  kube-proxy       
	  Normal  Starting                 8m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m25s  kubelet          Node addons-823768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s  kubelet          Node addons-823768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s  kubelet          Node addons-823768 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m25s  kubelet          Node addons-823768 status is now: NodeReady
	  Normal  RegisteredNode           8m22s  node-controller  Node addons-823768 event: Registered Node addons-823768 in Controller
	
	
	==> dmesg <==
	[  +5.484443] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[  +0.073928] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.273691] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +0.162747] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.214655] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.210113] kauditd_printk_skb: 116 callbacks suppressed
	[Jan20 15:06] kauditd_printk_skb: 110 callbacks suppressed
	[ +19.139201] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.836806] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.754303] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.384646] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.188215] kauditd_printk_skb: 39 callbacks suppressed
	[Jan20 15:07] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.189656] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.820035] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.687193] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.564951] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.553515] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.060585] kauditd_printk_skb: 51 callbacks suppressed
	[  +7.298488] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.013101] kauditd_printk_skb: 4 callbacks suppressed
	[Jan20 15:08] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.389449] kauditd_printk_skb: 25 callbacks suppressed
	[Jan20 15:10] kauditd_printk_skb: 12 callbacks suppressed
	[ +25.504053] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9] <==
	{"level":"warn","ts":"2025-01-20T15:06:53.622686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:06:53.214746Z","time spent":"407.928378ms","remote":"127.0.0.1:33576","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-20T15:06:53.622815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.236562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:06:53.622921Z","caller":"traceutil/trace.go:171","msg":"trace[1179903400] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1026; }","duration":"128.363318ms","start":"2025-01-20T15:06:53.494550Z","end":"2025-01-20T15:06:53.622913Z","steps":["trace[1179903400] 'agreement among raft nodes before linearized reading'  (duration: 128.237512ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:07:04.175891Z","caller":"traceutil/trace.go:171","msg":"trace[1687271508] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1127; }","duration":"182.064453ms","start":"2025-01-20T15:07:03.993807Z","end":"2025-01-20T15:07:04.175872Z","steps":["trace[1687271508] 'read index received'  (duration: 177.609883ms)","trace[1687271508] 'applied index is now lower than readState.Index'  (duration: 4.453716ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T15:07:04.176082Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.222905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:04.176101Z","caller":"traceutil/trace.go:171","msg":"trace[892120133] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"182.312552ms","start":"2025-01-20T15:07:03.993783Z","end":"2025-01-20T15:07:04.176096Z","steps":["trace[892120133] 'agreement among raft nodes before linearized reading'  (duration: 182.222961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:04.176369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.332376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:04.176412Z","caller":"traceutil/trace.go:171","msg":"trace[2055759117] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"140.400644ms","start":"2025-01-20T15:07:04.036004Z","end":"2025-01-20T15:07:04.176405Z","steps":["trace[2055759117] 'agreement among raft nodes before linearized reading'  (duration: 140.338392ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:07:06.637079Z","caller":"traceutil/trace.go:171","msg":"trace[138422048] linearizableReadLoop","detail":"{readStateIndex:1135; appliedIndex:1134; }","duration":"144.033657ms","start":"2025-01-20T15:07:06.493032Z","end":"2025-01-20T15:07:06.637065Z","steps":["trace[138422048] 'read index received'  (duration: 143.913692ms)","trace[138422048] 'applied index is now lower than readState.Index'  (duration: 119.506µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T15:07:06.637381Z","caller":"traceutil/trace.go:171","msg":"trace[1309473886] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"261.49564ms","start":"2025-01-20T15:07:06.375877Z","end":"2025-01-20T15:07:06.637373Z","steps":["trace[1309473886] 'process raft request'  (duration: 261.110224ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:06.637533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.488772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:06.637569Z","caller":"traceutil/trace.go:171","msg":"trace[673675145] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"144.554654ms","start":"2025-01-20T15:07:06.493009Z","end":"2025-01-20T15:07:06.637563Z","steps":["trace[673675145] 'agreement among raft nodes before linearized reading'  (duration: 144.494071ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:06.637663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.810086ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:06.637693Z","caller":"traceutil/trace.go:171","msg":"trace[790918003] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1102; }","duration":"142.850687ms","start":"2025-01-20T15:07:06.494838Z","end":"2025-01-20T15:07:06.637689Z","steps":["trace[790918003] 'agreement among raft nodes before linearized reading'  (duration: 142.808266ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:06.639008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.824656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:06.639058Z","caller":"traceutil/trace.go:171","msg":"trace[1373366693] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"102.929966ms","start":"2025-01-20T15:07:06.536120Z","end":"2025-01-20T15:07:06.639050Z","steps":["trace[1373366693] 'agreement among raft nodes before linearized reading'  (duration: 102.854164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:06.816709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.709345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:06.816810Z","caller":"traceutil/trace.go:171","msg":"trace[710433880] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"102.836863ms","start":"2025-01-20T15:07:06.713959Z","end":"2025-01-20T15:07:06.816796Z","steps":["trace[710433880] 'range keys from in-memory index tree'  (duration: 102.636914ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:07:38.235473Z","caller":"traceutil/trace.go:171","msg":"trace[541629752] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1316; }","duration":"424.155374ms","start":"2025-01-20T15:07:37.811290Z","end":"2025-01-20T15:07:38.235445Z","steps":["trace[541629752] 'process raft request'  (duration: 424.050614ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:38.235864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:07:37.811275Z","time spent":"424.383764ms","remote":"127.0.0.1:33794","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:865 > success:<request_delete_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > > failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
	{"level":"info","ts":"2025-01-20T15:07:38.236302Z","caller":"traceutil/trace.go:171","msg":"trace[1154709480] linearizableReadLoop","detail":"{readStateIndex:1357; appliedIndex:1357; }","duration":"294.273699ms","start":"2025-01-20T15:07:37.942019Z","end":"2025-01-20T15:07:38.236292Z","steps":["trace[1154709480] 'read index received'  (duration: 294.270262ms)","trace[1154709480] 'applied index is now lower than readState.Index'  (duration: 2.637µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T15:07:38.237026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.993346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:38.237394Z","caller":"traceutil/trace.go:171","msg":"trace[1054917839] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1316; }","duration":"295.389592ms","start":"2025-01-20T15:07:37.941996Z","end":"2025-01-20T15:07:38.237385Z","steps":["trace[1054917839] 'agreement among raft nodes before linearized reading'  (duration: 294.979507ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:07:38.237161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.437044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:07:38.237673Z","caller":"traceutil/trace.go:171","msg":"trace[2072934601] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1316; }","duration":"232.963958ms","start":"2025-01-20T15:07:38.004697Z","end":"2025-01-20T15:07:38.237661Z","steps":["trace[2072934601] 'agreement among raft nodes before linearized reading'  (duration: 232.442407ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:14:05 up 9 min,  0 users,  load average: 0.23, 0.94, 0.74
	Linux addons-823768 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061] <==
	W0120 15:06:37.744311       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 15:06:37.744398       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 15:06:37.745499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 15:06:37.745571       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0120 15:06:41.753217       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.184.109:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.184.109:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W0120 15:06:41.753511       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 15:06:41.753603       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 15:06:41.754487       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0120 15:06:41.791340       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0120 15:07:22.363530       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:54916: use of closed network connection
	E0120 15:07:22.559816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:54946: use of closed network connection
	I0120 15:07:32.120012       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.207.216"}
	I0120 15:07:57.385722       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0120 15:07:58.513579       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0120 15:07:58.845666       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0120 15:08:02.223494       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0120 15:08:02.404784       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.212.216"}
	I0120 15:08:42.776322       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0120 15:10:22.830795       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.53.157"}
	
	
	==> kube-controller-manager [910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6] <==
	E0120 15:10:59.600667       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:10:59.601378       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:10:59.601432       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0120 15:11:08.978544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="50.232µs"
	I0120 15:11:16.940398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="addons-823768"
	W0120 15:11:45.783283       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 15:11:45.784565       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:11:45.785420       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:11:45.785495       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0120 15:11:51.983361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="149.609µs"
	I0120 15:12:03.984810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="112.878µs"
	W0120 15:12:21.021642       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 15:12:21.022503       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:12:21.023180       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:12:21.023284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0120 15:12:55.980004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="71.765µs"
	I0120 15:13:06.977891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="55.984µs"
	W0120 15:13:10.972795       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 15:13:10.973918       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:13:10.974821       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:13:10.974911       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 15:13:53.978193       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 15:13:53.979107       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 15:13:53.980126       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 15:13:53.980173       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 15:05:47.687355       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 15:05:47.705944       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0120 15:05:47.706029       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 15:05:47.853450       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 15:05:47.853525       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 15:05:47.855376       1 server_linux.go:170] "Using iptables Proxier"
	I0120 15:05:47.891958       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 15:05:47.892217       1 server.go:497] "Version info" version="v1.32.0"
	I0120 15:05:47.892280       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 15:05:47.910195       1 config.go:199] "Starting service config controller"
	I0120 15:05:47.910308       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 15:05:47.910348       1 config.go:105] "Starting endpoint slice config controller"
	I0120 15:05:47.910353       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 15:05:47.916092       1 config.go:329] "Starting node config controller"
	I0120 15:05:47.916125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 15:05:48.012507       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 15:05:48.012551       1 shared_informer.go:320] Caches are synced for service config
	I0120 15:05:48.021389       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115] <==
	E0120 15:05:37.227928       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:37.226486       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 15:05:37.227950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0120 15:05:37.225889       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:37.228648       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 15:05:37.228765       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.059356       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 15:05:38.059463       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.140355       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 15:05:38.140406       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.152627       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 15:05:38.152684       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 15:05:38.196083       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 15:05:38.196182       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.358544       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 15:05:38.358698       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.425806       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 15:05:38.425899       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.480160       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 15:05:38.480210       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.527483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 15:05:38.527585       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:05:38.532584       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 15:05:38.532947       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0120 15:05:40.619676       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 15:13:06 addons-823768 kubelet[1231]: E0120 15:13:06.962002    1231 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-njdj6" podUID="11eae7f9-7cd6-44da-b989-0b800a978cc2"
	Jan 20 15:13:10 addons-823768 kubelet[1231]: E0120 15:13:10.329519    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385990328894985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:10 addons-823768 kubelet[1231]: E0120 15:13:10.329796    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385990328894985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:18 addons-823768 kubelet[1231]: E0120 15:13:18.961991    1231 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-njdj6" podUID="11eae7f9-7cd6-44da-b989-0b800a978cc2"
	Jan 20 15:13:20 addons-823768 kubelet[1231]: E0120 15:13:20.333283    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386000332791817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:20 addons-823768 kubelet[1231]: E0120 15:13:20.333648    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386000332791817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:20 addons-823768 kubelet[1231]: E0120 15:13:20.960179    1231 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a"
	Jan 20 15:13:30 addons-823768 kubelet[1231]: E0120 15:13:30.337671    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386010336816263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:30 addons-823768 kubelet[1231]: E0120 15:13:30.338325    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386010336816263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:39 addons-823768 kubelet[1231]: E0120 15:13:39.983176    1231 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 15:13:39 addons-823768 kubelet[1231]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 15:13:39 addons-823768 kubelet[1231]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 15:13:39 addons-823768 kubelet[1231]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 15:13:39 addons-823768 kubelet[1231]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 15:13:40 addons-823768 kubelet[1231]: E0120 15:13:40.341366    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386020340717425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:40 addons-823768 kubelet[1231]: E0120 15:13:40.341560    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386020340717425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:45 addons-823768 kubelet[1231]: I0120 15:13:45.960566    1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hd9wh" secret="" err="secret \"gcp-auth\" not found"
	Jan 20 15:13:50 addons-823768 kubelet[1231]: E0120 15:13:50.343906    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386030343467318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:13:50 addons-823768 kubelet[1231]: E0120 15:13:50.344359    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386030343467318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:14:00 addons-823768 kubelet[1231]: E0120 15:14:00.349711    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386040348614185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:14:00 addons-823768 kubelet[1231]: E0120 15:14:00.349740    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386040348614185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:14:03 addons-823768 kubelet[1231]: E0120 15:14:03.628391    1231 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jan 20 15:14:03 addons-823768 kubelet[1231]: E0120 15:14:03.628719    1231 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jan 20 15:14:03 addons-823768 kubelet[1231]: E0120 15:14:03.629056    1231 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr84p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationM
essagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jan 20 15:14:03 addons-823768 kubelet[1231]: E0120 15:14:03.631210    1231 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a"
	
	
	==> storage-provisioner [6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6] <==
	I0120 15:05:54.184778       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 15:05:54.216206       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 15:05:54.216325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 15:05:54.246555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 15:05:54.246676       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7!
	I0120 15:05:54.250017       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7873a08-e047-4b60-90dd-2fa00f314b75", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7 became leader
	I0120 15:05:54.347624       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-823768 -n addons-823768
helpers_test.go:261: (dbg) Run:  kubectl --context addons-823768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-njdj6 task-pv-pod
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-823768 describe pod hello-world-app-7d9564db4-njdj6 task-pv-pod
helpers_test.go:282: (dbg) kubectl --context addons-823768 describe pod hello-world-app-7d9564db4-njdj6 task-pv-pod:

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-njdj6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-823768/192.168.39.158
	Start Time:       Mon, 20 Jan 2025 15:10:22 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:           10.244.0.32
	Controlled By:  ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbczl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pbczl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m44s                default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-njdj6 to addons-823768
	  Warning  Failed     86s (x3 over 3m13s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     86s (x3 over 3m13s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    48s (x5 over 3m12s)  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     48s (x5 over 3m12s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    34s (x4 over 3m43s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-823768/192.168.39.158
	Start Time:       Mon, 20 Jan 2025 15:08:04 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gr84p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-gr84p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-823768
	  Warning  Failed     5m29s                 kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    46s (x10 over 5m29s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     46s (x10 over 5m29s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    34s (x5 over 6m)      kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3s (x5 over 5m29s)    kubelet            Error: ErrImagePull
	  Warning  Failed     3s (x4 over 4m47s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.877519658s)
--- FAIL: TestAddons/parallel/CSI (387.72s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (188.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a01f9dcb-31af-4d37-b76d-1956eee4715e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00537466s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-232451 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-232451 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-232451 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-232451 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0f851fb5-106f-46d9-8980-3efab3f8ae05] Pending
helpers_test.go:344: "sp-pod" [0f851fb5-106f-46d9-8980-3efab3f8ae05] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-232451 -n functional-232451
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-01-20 15:22:51.528494497 +0000 UTC m=+1090.815925669
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-232451 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-232451 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-232451/192.168.39.125
Start Time:       Mon, 20 Jan 2025 15:19:51 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzr5d (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-zzr5d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-232451
  Normal   Pulling    51s (x3 over 3m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     18s (x3 over 2m29s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     18s (x3 over 2m29s)  kubelet            Error: ErrImagePull
  Normal   BackOff    6s (x3 over 2m28s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     6s (x3 over 2m28s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-232451 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-232451 logs sp-pod -n default: exit status 1 (72.515526ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-232451 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-232451 -n functional-232451
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 logs -n 25: (1.556962419s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                                                      | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:20 UTC |
	|                | -p functional-232451                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /usr/share/ca-certificates/2136749.pem                                  |                   |         |         |                     |                     |
	| image          | functional-232451 image load --daemon                                   | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | kicbase/echo-server:functional-232451                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /etc/ssl/certs/51391683.0                                               |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /etc/ssl/certs/21367492.pem                                             |                   |         |         |                     |                     |
	| image          | functional-232451 image ls                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /usr/share/ca-certificates/21367492.pem                                 |                   |         |         |                     |                     |
	| image          | functional-232451 image load --daemon                                   | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | kicbase/echo-server:functional-232451                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:20 UTC |
	|                | /etc/test/nested/copy/2136749/hosts                                     |                   |         |         |                     |                     |
	| image          | functional-232451 image ls                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:20 UTC |
	| image          | functional-232451 image save kicbase/echo-server:functional-232451      | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-232451 image rm                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | kicbase/echo-server:functional-232451                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-232451 image ls                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	| image          | functional-232451 image load                                            | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| image          | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh pgrep                                             | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-232451 image build -t                                        | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | localhost/my-image:functional-232451                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-232451 image ls                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	| image          | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 15:19:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 15:19:58.279246 2145870 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:19:58.279368 2145870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:19:58.279377 2145870 out.go:358] Setting ErrFile to fd 2...
	I0120 15:19:58.279381 2145870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:19:58.279582 2145870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:19:58.280118 2145870 out.go:352] Setting JSON to false
	I0120 15:19:58.281194 2145870 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25344,"bootTime":1737361054,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:19:58.281316 2145870 start.go:139] virtualization: kvm guest
	I0120 15:19:58.283366 2145870 out.go:177] * [functional-232451] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:19:58.285360 2145870 notify.go:220] Checking for updates...
	I0120 15:19:58.285382 2145870 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 15:19:58.286963 2145870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:19:58.288488 2145870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:19:58.289836 2145870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:19:58.202926 2145841 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:19:58.203353 2145841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.203409 2145841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.224216 2145841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0120 15:19:58.224903 2145841 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.225621 2145841 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.225648 2145841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.226111 2145841 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.226362 2145841 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.226761 2145841 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:19:58.227211 2145841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.227278 2145841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.247151 2145841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0120 15:19:58.248682 2145841 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.249311 2145841 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.249336 2145841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.249827 2145841 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.250043 2145841 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.292058 2145870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 15:19:58.292070 2145841 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 15:19:58.293588 2145870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 15:19:58.293487 2145841 start.go:297] selected driver: kvm2
	I0120 15:19:58.293506 2145841 start.go:901] validating driver "kvm2" against &{Name:functional-232451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-232451 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:19:58.293670 2145841 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:19:58.295749 2145841 out.go:201] 
	W0120 15:19:58.297021 2145841 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB
	I0120 15:19:58.298375 2145841 out.go:201] 
	I0120 15:19:58.295448 2145870 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:19:58.296003 2145870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.296090 2145870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.316614 2145870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0120 15:19:58.317198 2145870 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.317863 2145870 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.317886 2145870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.318234 2145870 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.318447 2145870 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.318710 2145870 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:19:58.319020 2145870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.319070 2145870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.336227 2145870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0120 15:19:58.336775 2145870 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.337417 2145870 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.337441 2145870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.337833 2145870 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.338198 2145870 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.377387 2145870 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 15:19:58.378881 2145870 start.go:297] selected driver: kvm2
	I0120 15:19:58.378901 2145870 start.go:901] validating driver "kvm2" against &{Name:functional-232451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-232451 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:19:58.379036 2145870 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:19:58.379966 2145870 cni.go:84] Creating CNI manager for ""
	I0120 15:19:58.380051 2145870 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:19:58.380129 2145870 start.go:340] cluster config:
	{Name:functional-232451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-232451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:19:58.381870 2145870 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.446510648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad13d88d-336c-493b-82dd-7d10609cc761 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.447893793Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9e80d69-8cf7-4a9e-9cd3-04f13fa738df name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.448647781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386572448546195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9e80d69-8cf7-4a9e-9cd3-04f13fa738df name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.449443091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04a9188d-e207-48b1-9ae2-8a4d6357a8a3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.449500686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04a9188d-e207-48b1-9ae2-8a4d6357a8a3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.449921211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:613877e7c4ed3b17b3a6e2036c2284ef5a6e555a4e92cdcff0aaea67b73ea321,PodSandboxId:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737386431176106449,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841,PodSandboxId:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737386429075141890,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed,PodSandboxId:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737386392392673881,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a2e1a34dcb15900c8b37647593ff2451b4ca13a1b4d8aca04fb5fa23fb6463,PodSandboxId:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c1249518502ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389911408439,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401f292b4b96bcad2e140b3943a8758aa697de0996e5cb4e5170fcedda8eae16,PodSandboxId:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389800154156,Labels:map[strin
g]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f,PodSandboxId:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737386362300445629,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113,PodSandboxId:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737386362294204919,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be,PodSandboxId:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737386362279496693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002,PodSandboxId:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d
0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737386358684872251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78,PodSandboxId:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737386358499331885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f,PodSandboxId:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d
4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737386358490856504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e,PodSandboxId:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012
587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737386358481292868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868,PodSandboxId:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,St
ate:CONTAINER_EXITED,CreatedAt:1737386321584874368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac,PodSandboxId:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173738
6321593461927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84,PodSandboxId:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75
b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1737386317916133310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a,PodSandboxId:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737386317957068190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346,PodSandboxId:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1737386315821231554,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15,PodSandboxId:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737386306719084269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04a9188d-e207-48b1-9ae2-8a4d6357a8a3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.486287573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0d5cec2-886f-417a-b87a-4c12638fe96a name=/runtime.v1.RuntimeService/Version
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.486361972Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0d5cec2-886f-417a-b87a-4c12638fe96a name=/runtime.v1.RuntimeService/Version
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.487375097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=395d015c-a418-4845-8373-cf90e978a5da name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.488140747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386572488115643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=395d015c-a418-4845-8373-cf90e978a5da name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.489017602Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=942a382b-403e-4239-83a6-d5eaf46e7d0d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.489090325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=942a382b-403e-4239-83a6-d5eaf46e7d0d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.489428653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:613877e7c4ed3b17b3a6e2036c2284ef5a6e555a4e92cdcff0aaea67b73ea321,PodSandboxId:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737386431176106449,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841,PodSandboxId:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737386429075141890,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed,PodSandboxId:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737386392392673881,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a2e1a34dcb15900c8b37647593ff2451b4ca13a1b4d8aca04fb5fa23fb6463,PodSandboxId:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c1249518502ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389911408439,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401f292b4b96bcad2e140b3943a8758aa697de0996e5cb4e5170fcedda8eae16,PodSandboxId:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389800154156,Labels:map[strin
g]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f,PodSandboxId:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737386362300445629,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113,PodSandboxId:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737386362294204919,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be,PodSandboxId:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737386362279496693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002,PodSandboxId:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d
0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737386358684872251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78,PodSandboxId:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737386358499331885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f,PodSandboxId:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d
4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737386358490856504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e,PodSandboxId:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012
587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737386358481292868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868,PodSandboxId:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,St
ate:CONTAINER_EXITED,CreatedAt:1737386321584874368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac,PodSandboxId:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173738
6321593461927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84,PodSandboxId:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75
b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1737386317916133310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a,PodSandboxId:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737386317957068190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346,PodSandboxId:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1737386315821231554,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15,PodSandboxId:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737386306719084269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=942a382b-403e-4239-83a6-d5eaf46e7d0d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.519013501Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e8527bbd-2681-48cb-a1e6-9ab695b4a020 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.520320213Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d7e82a5ba1cced352645bed6cda1f9436c79e7f029de090aad9a1d5ea737477e,Metadata:&PodSandboxMetadata{Name:mysql-58ccfd96bb-cr6ch,Uid:aa0d9969-9457-4b02-ae8d-3d0f15808d66,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737386400919641203,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-58ccfd96bb-cr6ch,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa0d9969-9457-4b02-ae8d-3d0f15808d66,pod-template-hash: 58ccfd96bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:20:00.611072391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-5d59dccf9b-krvlp,Uid:e686db64-56a3-456c-bb3b-f416d03c531d,Namespace:kuberne
tes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737386400253773181,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,k8s-app: dashboard-metrics-scraper,pod-template-hash: 5d59dccf9b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:19:59.934745187Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-cw4db,Uid:906f8ce4-0a29-4ee1-8b2f-93941726640a,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737386400233985199,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:19:59.917922714Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e90fd02927f56779dc6a973f6236009253a454fa5b26d80afd5ff2d1ecaf039b,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:0f851fb5-106f-46d9-8980-3efab3f8ae05,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737386391559750792,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f851fb5-106f-46d9-8980-3efab3f8ae05,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containe
rs\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-01-20T15:19:51.252210415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1737386390874656171,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:19:50.565493164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c12
49518502ae,Metadata:&PodSandboxMetadata{Name:hello-node-connect-58f9cf68d8-fthjw,Uid:d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737386386436897424,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,pod-template-hash: 58f9cf68d8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:19:46.118840305Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&PodSandboxMetadata{Name:hello-node-fcfd88b6f-xwqfj,Uid:527fee39-02e6-465a-9d2c-d3b6ba261a85,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737386385876721759,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.nam
espace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,pod-template-hash: fcfd88b6f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:19:45.564772349Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-232451,Uid:9d0e5222a69976d3a3eab2b1a1c39de6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737386358493115451,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.125:8441,kubernetes.io/config.hash: 9d0e5222a69976d3a3eab2b1a1c39de6,kubernetes.io/config.seen: 2025-01-20T15:19:17.978022024Z,kubernetes.io/
config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-xrqz6,Uid:4312ac47-67f2-426c-af6d-49b4d0b5b4cf,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737386355892458749,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:18:41.263289954Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-232451,Uid:62ffca5bd7ac1fb8f611729873a67c29,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737386355779965150,Labels:map[string]string{component: kube-scheduler,i
o.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62ffca5bd7ac1fb8f611729873a67c29,kubernetes.io/config.seen: 2025-01-20T15:18:37.271184353Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a01f9dcb-31af-4d37-b76d-1956eee4715e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737386355776420211,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-co
nfiguration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-20T15:18:41.263303169Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&PodSandboxMetadata{Name:etcd-functional-232451,Uid:e7a49802a9b36d8b86e4dd9e29f7fc94,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:
1737386355766107888,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.125:2379,kubernetes.io/config.hash: e7a49802a9b36d8b86e4dd9e29f7fc94,kubernetes.io/config.seen: 2025-01-20T15:18:37.271178584Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-232451,Uid:7863fe704f72ad93cda4d45c9a0ecf7d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737386355662190076,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7863fe704f72ad93cda4d45c9a0ecf7d,kubernetes.io/config.seen: 2025-01-20T15:18:37.271183572Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&PodSandboxMetadata{Name:kube-proxy-gzbbh,Uid:e8e92644-e915-4db4-a4a5-0b874ca2f0b0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1737386355593697609,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:18:41.263300470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053a
f9c13058c95df8,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-xrqz6,Uid:4312ac47-67f2-426c-af6d-49b4d0b5b4cf,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737386293292184841,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:17:25.999686299Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-232451,Uid:62ffca5bd7ac1fb8f611729873a67c29,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737386293219210122,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62ffca5bd7ac1fb8f611729873a67c29,kubernetes.io/config.seen: 2025-01-20T15:17:20.667722667Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&PodSandboxMetadata{Name:etcd-functional-232451,Uid:e7a49802a9b36d8b86e4dd9e29f7fc94,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737386293195200786,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.125:2379,kubernetes.io/config.hash: e7a49802a9b36d8b86e4dd9e29f7fc94,kubernetes.io/config.seen: 2025-01-20T15:1
7:20.667712143Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-232451,Uid:7863fe704f72ad93cda4d45c9a0ecf7d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737386293187157236,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7863fe704f72ad93cda4d45c9a0ecf7d,kubernetes.io/config.seen: 2025-01-20T15:17:20.667721606Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&PodSandboxMetadata{Name:kube-proxy-gzbbh,Uid:e8e92644-e915-4db4-a4a5-0b874ca2f0b0,Namespace:kube-system
,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737386293059385877,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T15:17:25.608819230Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a01f9dcb-31af-4d37-b76d-1956eee4715e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1737386293057664321,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b
76d-1956eee4715e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-20T15:17:27.034122716Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e8527bbd-2681-48cb-a1e6-9ab695b4a020 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.521345686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63bb5055-e37c-49eb-98f1-16fddc1d6d16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.521399000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63bb5055-e37c-49eb-98f1-16fddc1d6d16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.522257440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:613877e7c4ed3b17b3a6e2036c2284ef5a6e555a4e92cdcff0aaea67b73ea321,PodSandboxId:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737386431176106449,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841,PodSandboxId:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737386429075141890,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed,PodSandboxId:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737386392392673881,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a2e1a34dcb15900c8b37647593ff2451b4ca13a1b4d8aca04fb5fa23fb6463,PodSandboxId:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c1249518502ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389911408439,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401f292b4b96bcad2e140b3943a8758aa697de0996e5cb4e5170fcedda8eae16,PodSandboxId:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389800154156,Labels:map[strin
g]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f,PodSandboxId:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737386362300445629,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113,PodSandboxId:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737386362294204919,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be,PodSandboxId:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737386362279496693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002,PodSandboxId:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d
0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737386358684872251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78,PodSandboxId:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737386358499331885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f,PodSandboxId:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d
4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737386358490856504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e,PodSandboxId:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012
587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737386358481292868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868,PodSandboxId:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,St
ate:CONTAINER_EXITED,CreatedAt:1737386321584874368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac,PodSandboxId:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173738
6321593461927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84,PodSandboxId:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75
b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1737386317916133310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a,PodSandboxId:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737386317957068190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346,PodSandboxId:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1737386315821231554,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15,PodSandboxId:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737386306719084269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63bb5055-e37c-49eb-98f1-16fddc1d6d16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.534517308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50678c4e-2631-4d7b-ae11-7753bf6ce300 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.534661467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50678c4e-2631-4d7b-ae11-7753bf6ce300 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.535916731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a61d3625-a07e-47cc-966f-1912ce4ef0c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.536544681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386572536524088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a61d3625-a07e-47cc-966f-1912ce4ef0c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.539053878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49d1c17e-1180-4f08-af53-d6756d3c8b25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.539111940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49d1c17e-1180-4f08-af53-d6756d3c8b25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:22:52 functional-232451 crio[5087]: time="2025-01-20 15:22:52.540013479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:613877e7c4ed3b17b3a6e2036c2284ef5a6e555a4e92cdcff0aaea67b73ea321,PodSandboxId:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737386431176106449,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841,PodSandboxId:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737386429075141890,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed,PodSandboxId:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737386392392673881,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a2e1a34dcb15900c8b37647593ff2451b4ca13a1b4d8aca04fb5fa23fb6463,PodSandboxId:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c1249518502ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389911408439,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401f292b4b96bcad2e140b3943a8758aa697de0996e5cb4e5170fcedda8eae16,PodSandboxId:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389800154156,Labels:map[strin
g]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f,PodSandboxId:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737386362300445629,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113,PodSandboxId:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737386362294204919,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be,PodSandboxId:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737386362279496693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002,PodSandboxId:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d
0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737386358684872251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78,PodSandboxId:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737386358499331885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f,PodSandboxId:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d
4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737386358490856504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e,PodSandboxId:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012
587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737386358481292868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868,PodSandboxId:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,St
ate:CONTAINER_EXITED,CreatedAt:1737386321584874368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac,PodSandboxId:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173738
6321593461927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84,PodSandboxId:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75
b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1737386317916133310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a,PodSandboxId:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737386317957068190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346,PodSandboxId:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1737386315821231554,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15,PodSandboxId:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737386306719084269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49d1c17e-1180-4f08-af53-d6756d3c8b25 name=/runtime.v1.RuntimeService/ListContainers
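	A minimal sketch for re-reading these CRI-O debug entries directly on the node, assuming the profile name and that crio runs as a systemd unit with journald logging (the line count is illustrative):
	  # tail the crio unit journal inside the minikube VM
	  out/minikube-linux-amd64 -p functional-232451 ssh "sudo journalctl -u crio -n 100 --no-pager"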
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	613877e7c4ed3       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   2 minutes ago       Running             dashboard-metrics-scraper   0                   285b1316f24c1       dashboard-metrics-scraper-5d59dccf9b-krvlp
	0dbf2ec322001       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   b13143c58a5d1       kubernetes-dashboard-7779f9b69b-cw4db
	8eeacfc29d8db       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago       Exited              mount-munger                0                   4b9c1400dfa11       busybox-mount
	17a2e1a34dcb1       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   74c0962198986       hello-node-connect-58f9cf68d8-fthjw
	401f292b4b96b       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   f6ceac7b67767       hello-node-fcfd88b6f-xwqfj
	65e0b3fea0b7d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         3                   1ed581631233e       storage-provisioner
	85aa044ade022       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago       Running             coredns                     3                   c2c319137beb3       coredns-668d6bf9bc-xrqz6
	e8736db30f30e       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                 3 minutes ago       Running             kube-proxy                  3                   dccfac5d8eb6f       kube-proxy-gzbbh
	870e2c8872b03       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                                 3 minutes ago       Running             kube-apiserver              0                   e9dcc4fd75f40       kube-apiserver-functional-232451
	54ca42054826e       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 3 minutes ago       Running             etcd                        3                   ea1d014c7afcb       etcd-functional-232451
	845b30ac779bc       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                 3 minutes ago       Running             kube-controller-manager     3                   dbd6a3ba078ef       kube-controller-manager-functional-232451
	88bf6f7a787a5       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                 3 minutes ago       Running             kube-scheduler              3                   16509e91e328b       kube-scheduler-functional-232451
	cff6ea6d7d7e8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago       Exited              coredns                     2                   c5020f3a85d33       coredns-668d6bf9bc-xrqz6
	f5612b5d32597       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                 4 minutes ago       Exited              kube-proxy                  2                   2de6f2d08610e       kube-proxy-gzbbh
	383d6ca5101be       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 4 minutes ago       Exited              etcd                        2                   d34e94acbffae       etcd-functional-232451
	64578c99a681e       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                 4 minutes ago       Exited              kube-controller-manager     2                   5dfbbec07e9eb       kube-controller-manager-functional-232451
	d4369396c1ac4       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                 4 minutes ago       Exited              kube-scheduler              2                   9cd9e9eaacb5e       kube-scheduler-functional-232451
	49ca9ec35a778       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         2                   7c8a24f37c4fd       storage-provisioner
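	A minimal sketch for reproducing the table above, assuming crictl is available on the node (as it is with minikube's CRI-O runtime); -a includes exited containers:
	  # list running and exited containers via the CRI endpoint
	  out/minikube-linux-amd64 -p functional-232451 ssh "sudo crictl ps -a"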
	
	
	==> coredns [85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51309 - 29307 "HINFO IN 5224763676596627299.4304674001143364677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037708884s
	
	
	==> coredns [cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45630 - 53876 "HINFO IN 7361856938211515730.8614983633791882584. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.091314299s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
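	A minimal sketch for fetching both coredns log streams through the API server instead of the runtime, assuming the kubectl context name matches the profile; --previous returns the exited attempt shown above:
	  # current coredns container
	  kubectl --context functional-232451 -n kube-system logs coredns-668d6bf9bc-xrqz6
	  # previous (exited) coredns container
	  kubectl --context functional-232451 -n kube-system logs coredns-668d6bf9bc-xrqz6 --previous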
	
	
	==> describe nodes <==
	Name:               functional-232451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-232451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
	                    minikube.k8s.io/name=functional-232451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T15_17_21_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 15:17:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-232451
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 15:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 15:20:53 +0000   Mon, 20 Jan 2025 15:17:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 15:20:53 +0000   Mon, 20 Jan 2025 15:17:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 15:20:53 +0000   Mon, 20 Jan 2025 15:17:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 15:20:53 +0000   Mon, 20 Jan 2025 15:17:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    functional-232451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd597908e4074bb881b106874105bcbc
	  System UUID:                cd597908-e407-4bb8-81b1-06874105bcbc
	  Boot ID:                    e93c83e9-7011-4881-8bd8-17bf4e9c8097
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-fthjw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     hello-node-fcfd88b6f-xwqfj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     mysql-58ccfd96bb-cr6ch                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    2m52s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-668d6bf9bc-xrqz6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m27s
	  kube-system                 etcd-functional-232451                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m32s
	  kube-system                 kube-apiserver-functional-232451              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kube-controller-manager-functional-232451     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-proxy-gzbbh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-functional-232451              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-krvlp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-cw4db         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m30s                  kube-proxy       
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  Starting                 4m35s                  kube-proxy       
	  Normal  Starting                 5m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m32s                  kubelet          Node functional-232451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s                  kubelet          Node functional-232451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m32s                  kubelet          Node functional-232451 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m31s                  kubelet          Node functional-232451 status is now: NodeReady
	  Normal  RegisteredNode           5m28s                  node-controller  Node functional-232451 event: Registered Node functional-232451 in Controller
	  Normal  RegisteredNode           4m32s                  node-controller  Node functional-232451 event: Registered Node functional-232451 in Controller
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m15s)  kubelet          Node functional-232451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m15s)  kubelet          Node functional-232451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x7 over 4m15s)  kubelet          Node functional-232451 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m8s                   node-controller  Node functional-232451 event: Registered Node functional-232451 in Controller
	  Normal  Starting                 3m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m34s (x8 over 3m34s)  kubelet          Node functional-232451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m34s (x8 over 3m34s)  kubelet          Node functional-232451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m34s (x7 over 3m34s)  kubelet          Node functional-232451 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m28s                  node-controller  Node functional-232451 event: Registered Node functional-232451 in Controller
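	A minimal sketch for regenerating the node description above, assuming the kubectl context name matches the profile:
	  # same view as the "describe nodes" section
	  kubectl --context functional-232451 describe node functional-232451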
	
	
	==> dmesg <==
	[  +0.123353] systemd-fstab-generator[2511]: Ignoring "noauto" option for root device
	[  +0.298856] systemd-fstab-generator[2539]: Ignoring "noauto" option for root device
	[  +1.608897] systemd-fstab-generator[3171]: Ignoring "noauto" option for root device
	[  +3.393113] kauditd_printk_skb: 201 callbacks suppressed
	[ +19.569174] systemd-fstab-generator[3722]: Ignoring "noauto" option for root device
	[  +4.588248] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.741155] systemd-fstab-generator[4153]: Ignoring "noauto" option for root device
	[  +0.102919] kauditd_printk_skb: 4 callbacks suppressed
	[Jan20 15:19] systemd-fstab-generator[5012]: Ignoring "noauto" option for root device
	[  +0.072695] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.058350] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[  +0.169422] systemd-fstab-generator[5038]: Ignoring "noauto" option for root device
	[  +0.139679] systemd-fstab-generator[5050]: Ignoring "noauto" option for root device
	[  +0.286943] systemd-fstab-generator[5078]: Ignoring "noauto" option for root device
	[  +1.447823] systemd-fstab-generator[5205]: Ignoring "noauto" option for root device
	[  +2.567601] systemd-fstab-generator[5717]: Ignoring "noauto" option for root device
	[  +0.404868] kauditd_printk_skb: 199 callbacks suppressed
	[  +6.941639] kauditd_printk_skb: 36 callbacks suppressed
	[  +9.382909] systemd-fstab-generator[6250]: Ignoring "noauto" option for root device
	[  +6.548145] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.098098] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.460643] kauditd_printk_skb: 35 callbacks suppressed
	[  +8.534354] kauditd_printk_skb: 15 callbacks suppressed
	[Jan20 15:20] kauditd_printk_skb: 34 callbacks suppressed
	[Jan20 15:21] kauditd_printk_skb: 4 callbacks suppressed
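	A minimal sketch for pulling the kernel ring buffer shown above, assuming shell access to the VM (the tail length is illustrative):
	  # kernel messages from inside the minikube VM
	  out/minikube-linux-amd64 -p functional-232451 ssh "sudo dmesg | tail -n 50"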
	
	
	==> etcd [383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a] <==
	{"level":"info","ts":"2025-01-20T15:18:39.536435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 3"}
	{"level":"info","ts":"2025-01-20T15:18:39.536472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2025-01-20T15:18:39.536488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 4"}
	{"level":"info","ts":"2025-01-20T15:18:39.536505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 4"}
	{"level":"info","ts":"2025-01-20T15:18:39.536514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 4"}
	{"level":"info","ts":"2025-01-20T15:18:39.536522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 4"}
	{"level":"info","ts":"2025-01-20T15:18:39.543281Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:functional-232451 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-20T15:18:39.543299Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T15:18:39.543320Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T15:18:39.544136Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T15:18:39.544397Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T15:18:39.544843Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T15:18:39.545112Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2025-01-20T15:18:39.545260Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-20T15:18:39.545290Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-20T15:19:06.863205Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-01-20T15:19:06.863246Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-232451","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"]}
	{"level":"warn","ts":"2025-01-20T15:19:06.863303Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-20T15:19:06.863373Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-20T15:19:06.947403Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.125:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-20T15:19:06.947526Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.125:2379: use of closed network connection"}
	{"level":"info","ts":"2025-01-20T15:19:06.947660Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4d3edba9e42b28c","current-leader-member-id":"f4d3edba9e42b28c"}
	{"level":"info","ts":"2025-01-20T15:19:06.950847Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2025-01-20T15:19:06.950999Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2025-01-20T15:19:06.951027Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-232451","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"]}
	
	
	==> etcd [54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78] <==
	{"level":"info","ts":"2025-01-20T15:19:20.439923Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:functional-232451 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-20T15:19:20.440155Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T15:19:20.440317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-20T15:19:20.440376Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-20T15:19:20.440488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T15:19:20.441195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T15:19:20.441804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T15:19:20.441199Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T15:19:20.442383Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"warn","ts":"2025-01-20T15:20:00.559195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.284229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:20:00.559478Z","caller":"traceutil/trace.go:171","msg":"trace[641265805] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:865; }","duration":"220.626259ms","start":"2025-01-20T15:20:00.338828Z","end":"2025-01-20T15:20:00.559454Z","steps":["trace[641265805] 'range keys from in-memory index tree'  (duration: 220.237335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:20:00.560270Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.965623ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12865821531421816995 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/default/mysql-rnfck\" mod_revision:0 > success:<request_put:<key:\"/registry/endpointslices/default/mysql-rnfck\" value_size:801 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-01-20T15:20:00.561891Z","caller":"traceutil/trace.go:171","msg":"trace[152791353] linearizableReadLoop","detail":"{readStateIndex:948; appliedIndex:947; }","duration":"209.36401ms","start":"2025-01-20T15:20:00.352512Z","end":"2025-01-20T15:20:00.561876Z","steps":["trace[152791353] 'read index received'  (duration: 55.935201ms)","trace[152791353] 'applied index is now lower than readState.Index'  (duration: 153.427589ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T15:20:00.561994Z","caller":"traceutil/trace.go:171","msg":"trace[2020145225] transaction","detail":"{read_only:false; response_revision:866; number_of_response:1; }","duration":"218.796936ms","start":"2025-01-20T15:20:00.343186Z","end":"2025-01-20T15:20:00.561983Z","steps":["trace[2020145225] 'process raft request'  (duration: 65.253987ms)","trace[2020145225] 'compare'  (duration: 150.417573ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T15:20:00.562120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.595088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:698"}
	{"level":"info","ts":"2025-01-20T15:20:00.562502Z","caller":"traceutil/trace.go:171","msg":"trace[972187481] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:866; }","duration":"210.00022ms","start":"2025-01-20T15:20:00.352483Z","end":"2025-01-20T15:20:00.562484Z","steps":["trace[972187481] 'agreement among raft nodes before linearized reading'  (duration: 209.531681ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:20:00.564870Z","caller":"traceutil/trace.go:171","msg":"trace[421536517] transaction","detail":"{read_only:false; response_revision:868; number_of_response:1; }","duration":"162.985755ms","start":"2025-01-20T15:20:00.401875Z","end":"2025-01-20T15:20:00.564861Z","steps":["trace[421536517] 'process raft request'  (duration: 162.960382ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:20:00.565119Z","caller":"traceutil/trace.go:171","msg":"trace[1570227971] transaction","detail":"{read_only:false; response_revision:867; number_of_response:1; }","duration":"212.473986ms","start":"2025-01-20T15:20:00.352637Z","end":"2025-01-20T15:20:00.565111Z","steps":["trace[1570227971] 'process raft request'  (duration: 212.105806ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:20:28.604512Z","caller":"traceutil/trace.go:171","msg":"trace[1882643509] linearizableReadLoop","detail":"{readStateIndex:1001; appliedIndex:1000; }","duration":"293.5863ms","start":"2025-01-20T15:20:28.310911Z","end":"2025-01-20T15:20:28.604497Z","steps":["trace[1882643509] 'read index received'  (duration: 293.443781ms)","trace[1882643509] 'applied index is now lower than readState.Index'  (duration: 142.114µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T15:20:28.604654Z","caller":"traceutil/trace.go:171","msg":"trace[2011082731] transaction","detail":"{read_only:false; response_revision:912; number_of_response:1; }","duration":"313.03333ms","start":"2025-01-20T15:20:28.291612Z","end":"2025-01-20T15:20:28.604646Z","steps":["trace[2011082731] 'process raft request'  (duration: 312.764704ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:20:28.604829Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:20:28.291555Z","time spent":"313.120726ms","remote":"127.0.0.1:34354","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:911 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-20T15:20:28.604885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.942367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:20:28.604919Z","caller":"traceutil/trace.go:171","msg":"trace[2140878658] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:912; }","duration":"294.042689ms","start":"2025-01-20T15:20:28.310868Z","end":"2025-01-20T15:20:28.604910Z","steps":["trace[2140878658] 'agreement among raft nodes before linearized reading'  (duration: 293.969366ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:21:02.106436Z","caller":"traceutil/trace.go:171","msg":"trace[1917006707] transaction","detail":"{read_only:false; response_revision:959; number_of_response:1; }","duration":"311.056462ms","start":"2025-01-20T15:21:01.795354Z","end":"2025-01-20T15:21:02.106411Z","steps":["trace[1917006707] 'process raft request'  (duration: 310.956206ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:21:02.107092Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:21:01.795274Z","time spent":"311.721675ms","remote":"127.0.0.1:34266","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":947,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/mysql-58ccfd96bb-cr6ch.181c6fc82c96f09e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/mysql-58ccfd96bb-cr6ch.181c6fc82c96f09e\" value_size:865 lease:3642449494567041384 >> failure:<>"}
	
	
	==> kernel <==
	 15:22:52 up 6 min,  0 users,  load average: 0.38, 0.51, 0.26
	Linux functional-232451 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002] <==
	I0120 15:19:21.697749       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0120 15:19:21.698366       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0120 15:19:21.708460       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0120 15:19:21.709121       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0120 15:19:21.709606       1 aggregator.go:171] initial CRD sync complete...
	I0120 15:19:21.709677       1 autoregister_controller.go:144] Starting autoregister controller
	I0120 15:19:21.709701       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0120 15:19:21.709791       1 cache.go:39] Caches are synced for autoregister controller
	I0120 15:19:21.711148       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0120 15:19:22.029184       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0120 15:19:22.496745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0120 15:19:23.189710       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0120 15:19:23.254041       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0120 15:19:23.307969       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0120 15:19:23.314094       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0120 15:19:24.824431       1 controller.go:615] quota admission added evaluator for: endpoints
	I0120 15:19:25.071990       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0120 15:19:41.003244       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.217.76"}
	I0120 15:19:45.479362       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0120 15:19:45.635978       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.197.235"}
	I0120 15:19:46.182828       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.109.169"}
	I0120 15:19:59.632771       1 controller.go:615] quota admission added evaluator for: namespaces
	I0120 15:20:00.063251       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.189.216"}
	I0120 15:20:00.126499       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.65.10"}
	I0120 15:20:00.304942       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.63.30"}
	
	
	==> kube-controller-manager [64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84] <==
	I0120 15:18:44.014475       1 shared_informer.go:320] Caches are synced for PVC protection
	I0120 15:18:44.016627       1 shared_informer.go:320] Caches are synced for GC
	I0120 15:18:44.016734       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0120 15:18:44.016767       1 shared_informer.go:320] Caches are synced for daemon sets
	I0120 15:18:44.016795       1 shared_informer.go:320] Caches are synced for taint
	I0120 15:18:44.016812       1 shared_informer.go:320] Caches are synced for PV protection
	I0120 15:18:44.016855       1 shared_informer.go:320] Caches are synced for TTL
	I0120 15:18:44.016888       1 shared_informer.go:320] Caches are synced for stateful set
	I0120 15:18:44.016922       1 shared_informer.go:320] Caches are synced for crt configmap
	I0120 15:18:44.016945       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0120 15:18:44.016996       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0120 15:18:44.017332       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-232451"
	I0120 15:18:44.017403       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0120 15:18:44.019132       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0120 15:18:44.019155       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0120 15:18:44.019177       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0120 15:18:44.023533       1 shared_informer.go:320] Caches are synced for resource quota
	I0120 15:18:44.034296       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0120 15:18:44.044735       1 shared_informer.go:320] Caches are synced for garbage collector
	I0120 15:18:44.058069       1 shared_informer.go:320] Caches are synced for resource quota
	I0120 15:18:44.065441       1 shared_informer.go:320] Caches are synced for garbage collector
	I0120 15:18:44.065476       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0120 15:18:44.065484       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0120 15:18:44.430436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="423.380295ms"
	I0120 15:18:44.431627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="103.109µs"
	
	
	==> kube-controller-manager [845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f] <==
	E0120 15:19:59.875749       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0120 15:19:59.876385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="10.324999ms"
	E0120 15:19:59.877041       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0120 15:19:59.927384       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="50.178573ms"
	I0120 15:19:59.944461       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="57.982838ms"
	I0120 15:19:59.967957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="40.520657ms"
	I0120 15:19:59.968018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="34.947µs"
	I0120 15:19:59.982466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="37.971871ms"
	I0120 15:19:59.983357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="24.879µs"
	I0120 15:20:00.033095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="50.594305ms"
	I0120 15:20:00.033182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="59.079µs"
	I0120 15:20:00.613281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="40.492907ms"
	I0120 15:20:00.631862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="18.545237ms"
	I0120 15:20:00.631998       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="52.581µs"
	I0120 15:20:00.659972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="49.052µs"
	I0120 15:20:22.706913       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-232451"
	I0120 15:20:29.889804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="18.076754ms"
	I0120 15:20:29.891999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="83.646µs"
	I0120 15:20:31.901774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="15.74545ms"
	I0120 15:20:31.901887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="35.434µs"
	I0120 15:20:53.137474       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-232451"
	I0120 15:21:02.124999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="41.714µs"
	I0120 15:21:16.045842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="81.135µs"
	I0120 15:22:18.051398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="140.924µs"
	I0120 15:22:30.043939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="44.202µs"
	
	
	==> kube-proxy [e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 15:19:22.624899       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 15:19:22.634226       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	E0120 15:19:22.634934       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 15:19:22.699734       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 15:19:22.699787       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 15:19:22.699810       1 server_linux.go:170] "Using iptables Proxier"
	I0120 15:19:22.703766       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 15:19:22.704350       1 server.go:497] "Version info" version="v1.32.0"
	I0120 15:19:22.704379       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 15:19:22.706440       1 config.go:199] "Starting service config controller"
	I0120 15:19:22.706487       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 15:19:22.706506       1 config.go:105] "Starting endpoint slice config controller"
	I0120 15:19:22.706509       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 15:19:22.707175       1 config.go:329] "Starting node config controller"
	I0120 15:19:22.707204       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 15:19:22.806651       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 15:19:22.806733       1 shared_informer.go:320] Caches are synced for service config
	I0120 15:19:22.807399       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 15:18:41.849644       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 15:18:41.865270       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	E0120 15:18:41.865346       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 15:18:41.916237       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 15:18:41.916286       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 15:18:41.916311       1 server_linux.go:170] "Using iptables Proxier"
	I0120 15:18:41.919632       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 15:18:41.919839       1 server.go:497] "Version info" version="v1.32.0"
	I0120 15:18:41.919869       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 15:18:41.921374       1 config.go:199] "Starting service config controller"
	I0120 15:18:41.921424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 15:18:41.921452       1 config.go:105] "Starting endpoint slice config controller"
	I0120 15:18:41.921457       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 15:18:41.922208       1 config.go:329] "Starting node config controller"
	I0120 15:18:41.922238       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 15:18:42.021895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 15:18:42.021963       1 shared_informer.go:320] Caches are synced for service config
	I0120 15:18:42.022372       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e] <==
	I0120 15:19:19.356173       1 serving.go:386] Generated self-signed cert in-memory
	W0120 15:19:21.587484       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 15:19:21.587668       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 15:19:21.587699       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 15:19:21.587777       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 15:19:21.635852       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0120 15:19:21.635971       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 15:19:21.654955       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0120 15:19:21.656973       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0120 15:19:21.657000       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0120 15:19:21.657209       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 15:19:21.666220       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346] <==
	E0120 15:18:40.793012       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.793217       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 15:18:40.793314       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.793459       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 15:18:40.793625       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.793761       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 15:18:40.793875       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.794084       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 15:18:40.794201       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.794335       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 15:18:40.794441       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.794629       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 15:18:40.794737       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.794880       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 15:18:40.796728       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.797271       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E0120 15:18:40.797388       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError"
	W0120 15:18:40.797499       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0120 15:18:40.797539       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 15:18:40.798124       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.797625       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 15:18:40.798266       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0120 15:18:40.798371       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0120 15:18:42.271286       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0120 15:19:06.842734       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.034652    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-cr6ch" podUID="aa0d9969-9457-4b02-ae8d-3d0f15808d66"
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.065314    5724 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 15:22:18 functional-232451 kubelet[5724]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 15:22:18 functional-232451 kubelet[5724]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 15:22:18 functional-232451 kubelet[5724]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 15:22:18 functional-232451 kubelet[5724]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.080894    5724 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod4312ac47-67f2-426c-af6d-49b4d0b5b4cf/crio-c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8: Error finding container c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8: Status 404 returned error can't find the container with id c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.081520    5724 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda01f9dcb-31af-4d37-b76d-1956eee4715e/crio-7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522: Error finding container 7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522: Status 404 returned error can't find the container with id 7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.082263    5724 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode7a49802a9b36d8b86e4dd9e29f7fc94/crio-d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662: Error finding container d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662: Status 404 returned error can't find the container with id d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.082730    5724 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod62ffca5bd7ac1fb8f611729873a67c29/crio-9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e: Error finding container 9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e: Status 404 returned error can't find the container with id 9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.083740    5724 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pode8e92644-e915-4db4-a4a5-0b874ca2f0b0/crio-2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027: Error finding container 2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027: Status 404 returned error can't find the container with id 2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.084244    5724 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7863fe704f72ad93cda4d45c9a0ecf7d/crio-5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5: Error finding container 5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5: Status 404 returned error can't find the container with id 5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.185260    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386538184943535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:22:18 functional-232451 kubelet[5724]: E0120 15:22:18.185283    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386538184943535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:22:28 functional-232451 kubelet[5724]: E0120 15:22:28.187673    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386548186733037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:22:28 functional-232451 kubelet[5724]: E0120 15:22:28.188384    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386548186733037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:22:33 functional-232451 kubelet[5724]: E0120 15:22:33.786405    5724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jan 20 15:22:33 functional-232451 kubelet[5724]: E0120 15:22:33.786498    5724 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jan 20 15:22:33 functional-232451 kubelet[5724]: E0120 15:22:33.786879    5724 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zzr5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(0f851fb5-106f-46d9-8980-3efab3f8ae05): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jan 20 15:22:33 functional-232451 kubelet[5724]: E0120 15:22:33.788164    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f851fb5-106f-46d9-8980-3efab3f8ae05"
	Jan 20 15:22:38 functional-232451 kubelet[5724]: E0120 15:22:38.193229    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386558192536168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:22:38 functional-232451 kubelet[5724]: E0120 15:22:38.193275    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386558192536168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:22:45 functional-232451 kubelet[5724]: E0120 15:22:45.028353    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f851fb5-106f-46d9-8980-3efab3f8ae05"
	Jan 20 15:22:48 functional-232451 kubelet[5724]: E0120 15:22:48.199873    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386568194924467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:22:48 functional-232451 kubelet[5724]: E0120 15:22:48.200999    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386568194924467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841] <==
	2025/01/20 15:20:29 Using namespace: kubernetes-dashboard
	2025/01/20 15:20:29 Using in-cluster config to connect to apiserver
	2025/01/20 15:20:29 Using secret token for csrf signing
	2025/01/20 15:20:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/20 15:20:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/20 15:20:29 Successful initial request to the apiserver, version: v1.32.0
	2025/01/20 15:20:29 Generating JWE encryption key
	2025/01/20 15:20:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/20 15:20:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/20 15:20:29 Initializing JWE encryption key from synchronized object
	2025/01/20 15:20:29 Creating in-cluster Sidecar client
	2025/01/20 15:20:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 15:20:29 Serving insecurely on HTTP port: 9090
	2025/01/20 15:20:59 Successful request to sidecar
	2025/01/20 15:20:29 Starting overwatch
	
	
	==> storage-provisioner [49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15] <==
	I0120 15:18:26.787999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 15:18:26.795905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 15:18:26.795959       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0120 15:18:36.763915       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0120 15:18:58.403700       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 15:18:58.404176       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c1d9817-72a3-4111-8123-202e6b17ab9e", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-232451_dc898177-e9a5-41a3-8e1f-ee16fe6a62f9 became leader
	I0120 15:18:58.404482       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-232451_dc898177-e9a5-41a3-8e1f-ee16fe6a62f9!
	I0120 15:18:58.505233       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-232451_dc898177-e9a5-41a3-8e1f-ee16fe6a62f9!
	
	
	==> storage-provisioner [65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f] <==
	I0120 15:19:22.475145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 15:19:22.510933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 15:19:22.511062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 15:19:39.914424       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 15:19:39.914607       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-232451_9d59ce4e-47c8-4cf6-92d0-de7c4ad05250!
	I0120 15:19:39.914965       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c1d9817-72a3-4111-8123-202e6b17ab9e", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-232451_9d59ce4e-47c8-4cf6-92d0-de7c4ad05250 became leader
	I0120 15:19:40.015888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-232451_9d59ce4e-47c8-4cf6-92d0-de7c4ad05250!
	I0120 15:19:51.075092       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0120 15:19:51.076957       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"96c4895a-c488-43c5-a463-cd611ace3f4d", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0120 15:19:51.075202       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    467d9dc3-6d7f-461d-ad0c-38d5b4a52ae7 337 0 2025-01-20 15:17:26 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-01-20 15:17:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-96c4895a-c488-43c5-a463-cd611ace3f4d &PersistentVolumeClaim{ObjectMeta:{myclaim  default  96c4895a-c488-43c5-a463-cd611ace3f4d 778 0 2025-01-20 15:19:51 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-01-20 15:19:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-01-20 15:19:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0120 15:19:51.078957       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-96c4895a-c488-43c5-a463-cd611ace3f4d" provisioned
	I0120 15:19:51.079056       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0120 15:19:51.079068       1 volume_store.go:212] Trying to save persistentvolume "pvc-96c4895a-c488-43c5-a463-cd611ace3f4d"
	I0120 15:19:51.097185       1 volume_store.go:219] persistentvolume "pvc-96c4895a-c488-43c5-a463-cd611ace3f4d" saved
	I0120 15:19:51.098028       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"96c4895a-c488-43c5-a463-cd611ace3f4d", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-96c4895a-c488-43c5-a463-cd611ace3f4d
	

                                                
                                                
-- /stdout --
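The provisioner log above shows the whole flow: the myclaim claim (ReadWriteOnce, 500Mi, defaulted onto the standard k8s.io/minikube-hostpath class with Immediate binding) is picked up, a hostpath volume is created under /tmp/hostpath-provisioner/default/myclaim, and the resulting PV pvc-96c4895a-c488-43c5-a463-cd611ace3f4d is saved before the ProvisioningSucceeded event fires. For reference, a minimal claim that exercises the same path looks roughly like the sketch below; this is an illustrative manifest, not the testdata file the suite actually applies:

kubectl --context functional-232451 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF
# The default "standard" StorageClass binds it immediately (VolumeBindingMode: Immediate):
kubectl --context functional-232451 get pvc myclaim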
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-232451 -n functional-232451
helpers_test.go:261: (dbg) Run:  kubectl --context functional-232451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-cr6ch sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-232451 describe pod busybox-mount mysql-58ccfd96bb-cr6ch sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-232451 describe pod busybox-mount mysql-58ccfd96bb-cr6ch sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-232451/192.168.39.125
	Start Time:       Mon, 20 Jan 2025 15:19:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 20 Jan 2025 15:19:52 +0000
	      Finished:     Mon, 20 Jan 2025 15:19:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2j69 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-j2j69:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m3s  default-scheduler  Successfully assigned default/busybox-mount to functional-232451
	  Normal  Pulling    3m2s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m1s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.207s (1.207s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m1s  kubelet            Created container: mount-munger
	  Normal  Started    3m1s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-cr6ch
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-232451/192.168.39.125
	Start Time:       Mon, 20 Jan 2025 15:20:00 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qk6bc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qk6bc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m53s                default-scheduler  Successfully assigned default/mysql-58ccfd96bb-cr6ch to functional-232451
	  Warning  Failed     50s (x2 over 112s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     50s (x2 over 112s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    35s (x2 over 111s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     35s (x2 over 111s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    23s (x3 over 2m52s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-232451/192.168.39.125
	Start Time:       Mon, 20 Jan 2025 15:19:51 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzr5d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zzr5d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-232451
	  Normal   Pulling    53s (x3 over 3m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     20s (x3 over 2m31s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     20s (x3 over 2m31s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x3 over 2m30s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x3 over 2m30s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0120 15:27:14.250283 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.06s)
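Note that none of the three non-running pods is blocked on the claim itself: busybox-mount completed, while mysql-58ccfd96bb-cr6ch and sp-pod are both stuck in ImagePullBackOff because docker.io returned toomanyrequests. A generic mitigation on a rate-limited runner, not something this test run performs, is to authenticate pulls with an imagePullSecret; the secret name regcred and the credentials below are placeholders:

# Hypothetical mitigation, not part of the test suite: authenticate docker.io pulls.
kubectl --context functional-232451 create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<access-token>
# Attach the secret to the default service account so new pods pull as that user.
kubectl --context functional-232451 patch serviceaccount default \
  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

With the default service account patched, pods in the default namespace pull as the authenticated user instead of anonymously, which raises the Docker Hub rate limit.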

                                                
                                    
x
+
TestFunctional/parallel/MySQL (603.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-232451 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-cr6ch" [aa0d9969-9457-4b02-ae8d-3d0f15808d66] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-232451 -n functional-232451
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-01-20 15:30:00.874872769 +0000 UTC m=+1520.162303937
functional_test.go:1799: (dbg) Run:  kubectl --context functional-232451 describe po mysql-58ccfd96bb-cr6ch -n default
functional_test.go:1799: (dbg) kubectl --context functional-232451 describe po mysql-58ccfd96bb-cr6ch -n default:
Name:             mysql-58ccfd96bb-cr6ch
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-232451/192.168.39.125
Start Time:       Mon, 20 Jan 2025 15:20:00 +0000
Labels:           app=mysql
                  pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:           10.244.0.13
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qk6bc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-qk6bc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-58ccfd96bb-cr6ch to functional-232451
  Normal   Pulling    4m (x5 over 9m59s)      kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     3m30s (x5 over 8m59s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     3m30s (x5 over 8m59s)   kubelet            Error: ErrImagePull
  Warning  Failed     2m26s (x16 over 8m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    86s (x21 over 8m58s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1799: (dbg) Run:  kubectl --context functional-232451 logs mysql-58ccfd96bb-cr6ch -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-232451 logs mysql-58ccfd96bb-cr6ch -n default: exit status 1 (80.137347ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-cr6ch" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-232451 logs mysql-58ccfd96bb-cr6ch -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
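The describe output and the failed logs call point at the same root cause as the PersistentVolumeClaim failure: docker.io/mysql:5.7 was never pulled inside the cluster because of Docker Hub rate limiting. One way to sidestep the in-cluster pull entirely, sketched here as a manual workaround rather than anything the suite does, is to load the image from the host (this assumes the host still has pull quota and that the Deployment created by testdata/mysql.yaml is named mysql):

# Sketch of a manual workaround: stage the image into the node's container runtime
# so the kubelet never has to pull from docker.io.
docker pull mysql:5.7
out/minikube-linux-amd64 -p functional-232451 image load mysql:5.7
kubectl --context functional-232451 rollout restart deployment mysql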
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-232451 -n functional-232451
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 logs -n 25: (1.630634987s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                                                      | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:20 UTC |
	|                | -p functional-232451                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /usr/share/ca-certificates/2136749.pem                                  |                   |         |         |                     |                     |
	| image          | functional-232451 image load --daemon                                   | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | kicbase/echo-server:functional-232451                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /etc/ssl/certs/51391683.0                                               |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /etc/ssl/certs/21367492.pem                                             |                   |         |         |                     |                     |
	| image          | functional-232451 image ls                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /usr/share/ca-certificates/21367492.pem                                 |                   |         |         |                     |                     |
	| image          | functional-232451 image load --daemon                                   | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | kicbase/echo-server:functional-232451                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:19 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh sudo cat                                          | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:20 UTC |
	|                | /etc/test/nested/copy/2136749/hosts                                     |                   |         |         |                     |                     |
	| image          | functional-232451 image ls                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:19 UTC | 20 Jan 25 15:20 UTC |
	| image          | functional-232451 image save kicbase/echo-server:functional-232451      | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-232451 image rm                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | kicbase/echo-server:functional-232451                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-232451 image ls                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	| image          | functional-232451 image load                                            | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| image          | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-232451 ssh pgrep                                             | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-232451 image build -t                                        | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | localhost/my-image:functional-232451                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-232451 image ls                                              | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	| image          | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-232451                                                       | functional-232451 | jenkins | v1.35.0 | 20 Jan 25 15:20 UTC | 20 Jan 25 15:20 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 15:19:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 15:19:58.279246 2145870 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:19:58.279368 2145870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:19:58.279377 2145870 out.go:358] Setting ErrFile to fd 2...
	I0120 15:19:58.279381 2145870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:19:58.279582 2145870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:19:58.280118 2145870 out.go:352] Setting JSON to false
	I0120 15:19:58.281194 2145870 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25344,"bootTime":1737361054,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:19:58.281316 2145870 start.go:139] virtualization: kvm guest
	I0120 15:19:58.283366 2145870 out.go:177] * [functional-232451] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:19:58.285360 2145870 notify.go:220] Checking for updates...
	I0120 15:19:58.285382 2145870 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 15:19:58.286963 2145870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:19:58.288488 2145870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:19:58.289836 2145870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:19:58.202926 2145841 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:19:58.203353 2145841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.203409 2145841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.224216 2145841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0120 15:19:58.224903 2145841 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.225621 2145841 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.225648 2145841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.226111 2145841 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.226362 2145841 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.226761 2145841 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:19:58.227211 2145841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.227278 2145841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.247151 2145841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0120 15:19:58.248682 2145841 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.249311 2145841 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.249336 2145841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.249827 2145841 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.250043 2145841 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.292058 2145870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 15:19:58.292070 2145841 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 15:19:58.293588 2145870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 15:19:58.293487 2145841 start.go:297] selected driver: kvm2
	I0120 15:19:58.293506 2145841 start.go:901] validating driver "kvm2" against &{Name:functional-232451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-232451 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:19:58.293670 2145841 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:19:58.295749 2145841 out.go:201] 
	W0120 15:19:58.297021 2145841 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 15:19:58.298375 2145841 out.go:201] 
	I0120 15:19:58.295448 2145870 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:19:58.296003 2145870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.296090 2145870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.316614 2145870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0120 15:19:58.317198 2145870 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.317863 2145870 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.317886 2145870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.318234 2145870 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.318447 2145870 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.318710 2145870 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:19:58.319020 2145870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.319070 2145870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.336227 2145870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0120 15:19:58.336775 2145870 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.337417 2145870 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.337441 2145870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.337833 2145870 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.338198 2145870 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.377387 2145870 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 15:19:58.378881 2145870 start.go:297] selected driver: kvm2
	I0120 15:19:58.378901 2145870 start.go:901] validating driver "kvm2" against &{Name:functional-232451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-232451 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:19:58.379036 2145870 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:19:58.379966 2145870 cni.go:84] Creating CNI manager for ""
	I0120 15:19:58.380051 2145870 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:19:58.380129 2145870 start.go:340] cluster config:
	{Name:functional-232451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-232451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:19:58.381870 2145870 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.748893038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737387001748868976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01f60d36-f6cd-4053-afd2-14d916347a1a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.749479907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce78c488-6263-4ea6-8480-4aa31460e4aa name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.749549715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce78c488-6263-4ea6-8480-4aa31460e4aa name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.749950966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:613877e7c4ed3b17b3a6e2036c2284ef5a6e555a4e92cdcff0aaea67b73ea321,PodSandboxId:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737386431176106449,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841,PodSandboxId:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737386429075141890,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed,PodSandboxId:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737386392392673881,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a2e1a34dcb15900c8b37647593ff2451b4ca13a1b4d8aca04fb5fa23fb6463,PodSandboxId:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c1249518502ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389911408439,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401f292b4b96bcad2e140b3943a8758aa697de0996e5cb4e5170fcedda8eae16,PodSandboxId:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389800154156,Labels:map[strin
g]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f,PodSandboxId:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737386362300445629,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113,PodSandboxId:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737386362294204919,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be,PodSandboxId:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737386362279496693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002,PodSandboxId:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d
0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737386358684872251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78,PodSandboxId:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737386358499331885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f,PodSandboxId:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d
4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737386358490856504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e,PodSandboxId:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012
587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737386358481292868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868,PodSandboxId:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,St
ate:CONTAINER_EXITED,CreatedAt:1737386321584874368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac,PodSandboxId:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173738
6321593461927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84,PodSandboxId:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75
b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1737386317916133310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a,PodSandboxId:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737386317957068190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346,PodSandboxId:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1737386315821231554,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15,PodSandboxId:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737386306719084269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce78c488-6263-4ea6-8480-4aa31460e4aa name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.794701599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad4a287a-1c4a-49ba-a8af-624933a492f0 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.794801430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad4a287a-1c4a-49ba-a8af-624933a492f0 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.795972473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4c38266-152a-4892-b00a-ea7cb51acd73 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.796722288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737387001796695392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4c38266-152a-4892-b00a-ea7cb51acd73 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.797354208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba44b75d-3104-40f4-9d32-97fc3bb89889 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.797428162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba44b75d-3104-40f4-9d32-97fc3bb89889 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.797888605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:613877e7c4ed3b17b3a6e2036c2284ef5a6e555a4e92cdcff0aaea67b73ea321,PodSandboxId:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737386431176106449,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841,PodSandboxId:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737386429075141890,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed,PodSandboxId:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737386392392673881,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a2e1a34dcb15900c8b37647593ff2451b4ca13a1b4d8aca04fb5fa23fb6463,PodSandboxId:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c1249518502ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389911408439,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401f292b4b96bcad2e140b3943a8758aa697de0996e5cb4e5170fcedda8eae16,PodSandboxId:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389800154156,Labels:map[strin
g]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f,PodSandboxId:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737386362300445629,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113,PodSandboxId:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737386362294204919,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be,PodSandboxId:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737386362279496693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002,PodSandboxId:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d
0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737386358684872251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78,PodSandboxId:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737386358499331885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f,PodSandboxId:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d
4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737386358490856504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e,PodSandboxId:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012
587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737386358481292868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868,PodSandboxId:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,St
ate:CONTAINER_EXITED,CreatedAt:1737386321584874368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac,PodSandboxId:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173738
6321593461927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84,PodSandboxId:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75
b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1737386317916133310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a,PodSandboxId:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737386317957068190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346,PodSandboxId:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1737386315821231554,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15,PodSandboxId:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737386306719084269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba44b75d-3104-40f4-9d32-97fc3bb89889 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.833162339Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33d6d7ca-79d8-4f8c-92a0-de4f1e03ed61 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.833236100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33d6d7ca-79d8-4f8c-92a0-de4f1e03ed61 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.840991387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6de5beea-f316-4f12-9359-656cf6e5dedd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.841812241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737387001841775449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6de5beea-f316-4f12-9359-656cf6e5dedd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.842516198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=900c7e09-dc12-476f-9887-562e6c8f802f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.842656040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=900c7e09-dc12-476f-9887-562e6c8f802f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.843016288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:613877e7c4ed3b17b3a6e2036c2284ef5a6e555a4e92cdcff0aaea67b73ea321,PodSandboxId:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737386431176106449,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841,PodSandboxId:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737386429075141890,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed,PodSandboxId:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737386392392673881,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a2e1a34dcb15900c8b37647593ff2451b4ca13a1b4d8aca04fb5fa23fb6463,PodSandboxId:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c1249518502ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389911408439,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401f292b4b96bcad2e140b3943a8758aa697de0996e5cb4e5170fcedda8eae16,PodSandboxId:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389800154156,Labels:map[strin
g]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f,PodSandboxId:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737386362300445629,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113,PodSandboxId:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737386362294204919,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be,PodSandboxId:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737386362279496693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002,PodSandboxId:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d
0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737386358684872251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78,PodSandboxId:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737386358499331885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f,PodSandboxId:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d
4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737386358490856504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e,PodSandboxId:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012
587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737386358481292868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868,PodSandboxId:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,St
ate:CONTAINER_EXITED,CreatedAt:1737386321584874368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac,PodSandboxId:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173738
6321593461927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84,PodSandboxId:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75
b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1737386317916133310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a,PodSandboxId:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737386317957068190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346,PodSandboxId:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1737386315821231554,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15,PodSandboxId:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737386306719084269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=900c7e09-dc12-476f-9887-562e6c8f802f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.884280474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f42c4a9-e823-44e1-aec3-621325734b49 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.884351611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f42c4a9-e823-44e1-aec3-621325734b49 name=/runtime.v1.RuntimeService/Version
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.885741748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbb5550a-7869-40f1-bf33-cfc4678a2b26 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.886392425Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737387001886370289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbb5550a-7869-40f1-bf33-cfc4678a2b26 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.887064108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b795620-2a4b-4b0f-85fb-d0603939ae9b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.887118426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b795620-2a4b-4b0f-85fb-d0603939ae9b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 15:30:01 functional-232451 crio[5087]: time="2025-01-20 15:30:01.887433229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:613877e7c4ed3b17b3a6e2036c2284ef5a6e555a4e92cdcff0aaea67b73ea321,PodSandboxId:285b1316f24c169106396ab21f0758ca53aabec705577250535b8fbb7f294372,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737386431176106449,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-krvlp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e686db64-56a3-456c-bb3b-f416d03c531d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841,PodSandboxId:b13143c58a5d1cd2ac46161a43882810684e02929d5b29bcec02e0077a22a508,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737386429075141890,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-cw4db,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 906f8ce4-0a29-4ee1-8b2f-93941726640a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed,PodSandboxId:4b9c1400dfa11370c821af26a3bbed9d01ce99e8375e714f10a67239fadb8819,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737386392392673881,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0e6cde-9a6b-4e88-9d49-dd51c9baa239,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a2e1a34dcb15900c8b37647593ff2451b4ca13a1b4d8aca04fb5fa23fb6463,PodSandboxId:74c0962198986c27094a96257ac75526bd31fbf77da8b469eb5c1249518502ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389911408439,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-fthjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1ca943d-f059-4c1b-8f13-95f6a0bb8c97,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401f292b4b96bcad2e140b3943a8758aa697de0996e5cb4e5170fcedda8eae16,PodSandboxId:f6ceac7b67767ad4bb2d44af5e5b3507782c0c33659a47836e98c7bc92d48792,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737386389800154156,Labels:map[strin
g]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-xwqfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 527fee39-02e6-465a-9d2c-d3b6ba261a85,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f,PodSandboxId:1ed581631233e33363b1950b3bc874eb4417090537a835a81f4d3dd55cec25ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737386362300445629,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113,PodSandboxId:c2c319137beb3a0d889206904fc5b414ad14e20fcf6fd9663f052a109fd999a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737386362294204919,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be,PodSandboxId:dccfac5d8eb6f432cedba3325ce3ec34f093e024727958540f3af32fc99db492,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737386362279496693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002,PodSandboxId:e9dcc4fd75f407005d63e388209b784bda5e84081a110b3a7b976f214c10ebb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d
0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737386358684872251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0e5222a69976d3a3eab2b1a1c39de6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78,PodSandboxId:ea1d014c7afcb11933861800f3e2e87b041359c08a2fc620ec5940a46b2c2d14,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737386358499331885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f,PodSandboxId:dbd6a3ba078ef0c569344a2cbeeac5888a57f607fa2f5f90a502ed088f6cad6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d
4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737386358490856504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e,PodSandboxId:16509e91e328b0d5291c4faf6662442c37af968ef90b58ae11d7c07108630185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012
587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737386358481292868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868,PodSandboxId:2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,St
ate:CONTAINER_EXITED,CreatedAt:1737386321584874368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gzbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8e92644-e915-4db4-a4a5-0b874ca2f0b0,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac,PodSandboxId:c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173738
6321593461927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xrqz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4312ac47-67f2-426c-af6d-49b4d0b5b4cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84,PodSandboxId:5dfbbec07e9eb3c3855de72a907e2319e03e21f60db5cd8dd0590dda59b3bce5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75
b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1737386317916133310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7863fe704f72ad93cda4d45c9a0ecf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a,PodSandboxId:d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737386317957068190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a49802a9b36d8b86e4dd9e29f7fc94,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346,PodSandboxId:9cd9e9eaacb5e2486ca9ea7d3ea4df5b23de0078cc98e8a3913e0c2ad9379e1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1737386315821231554,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-232451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ffca5bd7ac1fb8f611729873a67c29,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15,PodSandboxId:7c8a24f37c4fd434a92d943f1455c5a78607d275adb56806d13a9987bd0ee522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737386306719084269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01f9dcb-31af-4d37-b76d-1956eee4715e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b795620-2a4b-4b0f-85fb-d0603939ae9b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	613877e7c4ed3       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   285b1316f24c1       dashboard-metrics-scraper-5d59dccf9b-krvlp
	0dbf2ec322001       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   b13143c58a5d1       kubernetes-dashboard-7779f9b69b-cw4db
	8eeacfc29d8db       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   4b9c1400dfa11       busybox-mount
	17a2e1a34dcb1       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   74c0962198986       hello-node-connect-58f9cf68d8-fthjw
	401f292b4b96b       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   f6ceac7b67767       hello-node-fcfd88b6f-xwqfj
	65e0b3fea0b7d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   1ed581631233e       storage-provisioner
	85aa044ade022       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     3                   c2c319137beb3       coredns-668d6bf9bc-xrqz6
	e8736db30f30e       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                 10 minutes ago      Running             kube-proxy                  3                   dccfac5d8eb6f       kube-proxy-gzbbh
	870e2c8872b03       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                                 10 minutes ago      Running             kube-apiserver              0                   e9dcc4fd75f40       kube-apiserver-functional-232451
	54ca42054826e       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 10 minutes ago      Running             etcd                        3                   ea1d014c7afcb       etcd-functional-232451
	845b30ac779bc       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                 10 minutes ago      Running             kube-controller-manager     3                   dbd6a3ba078ef       kube-controller-manager-functional-232451
	88bf6f7a787a5       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                 10 minutes ago      Running             kube-scheduler              3                   16509e91e328b       kube-scheduler-functional-232451
	cff6ea6d7d7e8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     2                   c5020f3a85d33       coredns-668d6bf9bc-xrqz6
	f5612b5d32597       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                 11 minutes ago      Exited              kube-proxy                  2                   2de6f2d08610e       kube-proxy-gzbbh
	383d6ca5101be       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 11 minutes ago      Exited              etcd                        2                   d34e94acbffae       etcd-functional-232451
	64578c99a681e       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                 11 minutes ago      Exited              kube-controller-manager     2                   5dfbbec07e9eb       kube-controller-manager-functional-232451
	d4369396c1ac4       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                 11 minutes ago      Exited              kube-scheduler              2                   9cd9e9eaacb5e       kube-scheduler-functional-232451
	49ca9ec35a778       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   7c8a24f37c4fd       storage-provisioner
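The container listing above is CRI output from the node's cri-o runtime. A comparable view can usually be reproduced from the host with crictl; this is a sketch that assumes the functional-232451 profile is still running:

	out/minikube-linux-amd64 -p functional-232451 ssh "sudo crictl ps -a"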
	
	
	==> coredns [85aa044ade022eeffe8251bc270adc032b2d2a752aedf9cd7a103c5ade169113] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51309 - 29307 "HINFO IN 5224763676596627299.4304674001143364677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037708884s
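The single HINFO query answered with NXDOMAIN is most likely the startup probe from CoreDNS's loop plugin; an NXDOMAIN answer means no forwarding loop was detected, so nothing is wrong here. The same logs can be pulled directly from the running pod (sketch, pod name taken from the container listing above):

	kubectl --context functional-232451 -n kube-system logs coredns-668d6bf9bc-xrqz6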
	
	
	==> coredns [cff6ea6d7d7e8c76f7ad9135449365fd8e385337de2d22a00113c176be998eac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45630 - 53876 "HINFO IN 7361856938211515730.8614983633791882584. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.091314299s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
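This older CoreDNS instance (attempt 2) shut down cleanly on SIGTERM when the control plane was restarted, entering lameduck mode for 5s first. Logs of the exited container can typically still be read via the previous-instance flag (sketch):

	kubectl --context functional-232451 -n kube-system logs --previous coredns-668d6bf9bc-xrqz6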
	
	
	==> describe nodes <==
	Name:               functional-232451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-232451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
	                    minikube.k8s.io/name=functional-232451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T15_17_21_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 15:17:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-232451
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 15:29:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 15:29:23 +0000   Mon, 20 Jan 2025 15:17:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 15:29:23 +0000   Mon, 20 Jan 2025 15:17:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 15:29:23 +0000   Mon, 20 Jan 2025 15:17:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 15:29:23 +0000   Mon, 20 Jan 2025 15:17:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    functional-232451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd597908e4074bb881b106874105bcbc
	  System UUID:                cd597908-e407-4bb8-81b1-06874105bcbc
	  Boot ID:                    e93c83e9-7011-4881-8bd8-17bf4e9c8097
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-fthjw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-fcfd88b6f-xwqfj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-58ccfd96bb-cr6ch                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-668d6bf9bc-xrqz6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-232451                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-232451              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-232451     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gzbbh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-232451              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-krvlp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-cw4db         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-232451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-232451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-232451 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node functional-232451 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-232451 event: Registered Node functional-232451 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node functional-232451 event: Registered Node functional-232451 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-232451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-232451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-232451 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-232451 event: Registered Node functional-232451 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-232451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-232451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-232451 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-232451 event: Registered Node functional-232451 in Controller
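The Allocated resources percentages are simply requests and limits divided by the node's Allocatable values: CPU requests 1350m of 2000m is roughly 67%, and memory requests 682Mi of 3912788Ki (about 3821Mi) is roughly 17%. The same description can be regenerated at any time (sketch):

	kubectl --context functional-232451 describe node functional-232451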
	
	
	==> dmesg <==
	[  +0.123353] systemd-fstab-generator[2511]: Ignoring "noauto" option for root device
	[  +0.298856] systemd-fstab-generator[2539]: Ignoring "noauto" option for root device
	[  +1.608897] systemd-fstab-generator[3171]: Ignoring "noauto" option for root device
	[  +3.393113] kauditd_printk_skb: 201 callbacks suppressed
	[ +19.569174] systemd-fstab-generator[3722]: Ignoring "noauto" option for root device
	[  +4.588248] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.741155] systemd-fstab-generator[4153]: Ignoring "noauto" option for root device
	[  +0.102919] kauditd_printk_skb: 4 callbacks suppressed
	[Jan20 15:19] systemd-fstab-generator[5012]: Ignoring "noauto" option for root device
	[  +0.072695] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.058350] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[  +0.169422] systemd-fstab-generator[5038]: Ignoring "noauto" option for root device
	[  +0.139679] systemd-fstab-generator[5050]: Ignoring "noauto" option for root device
	[  +0.286943] systemd-fstab-generator[5078]: Ignoring "noauto" option for root device
	[  +1.447823] systemd-fstab-generator[5205]: Ignoring "noauto" option for root device
	[  +2.567601] systemd-fstab-generator[5717]: Ignoring "noauto" option for root device
	[  +0.404868] kauditd_printk_skb: 199 callbacks suppressed
	[  +6.941639] kauditd_printk_skb: 36 callbacks suppressed
	[  +9.382909] systemd-fstab-generator[6250]: Ignoring "noauto" option for root device
	[  +6.548145] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.098098] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.460643] kauditd_printk_skb: 35 callbacks suppressed
	[  +8.534354] kauditd_printk_skb: 15 callbacks suppressed
	[Jan20 15:20] kauditd_printk_skb: 34 callbacks suppressed
	[Jan20 15:21] kauditd_printk_skb: 4 callbacks suppressed
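The dmesg excerpt is dominated by systemd-fstab-generator and kauditd "callbacks suppressed" messages, which here most likely just reflect the repeated kubelet and control-plane restarts rather than a kernel fault. The full ring buffer can be read from the guest (sketch):

	out/minikube-linux-amd64 -p functional-232451 ssh "sudo dmesg | tail -n 100"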
	
	
	==> etcd [383d6ca5101be41fd7e61cc351a7c5d13fbd407918e8ccf39680bc6c21d6f07a] <==
	{"level":"info","ts":"2025-01-20T15:18:39.536435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 3"}
	{"level":"info","ts":"2025-01-20T15:18:39.536472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2025-01-20T15:18:39.536488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 4"}
	{"level":"info","ts":"2025-01-20T15:18:39.536505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 4"}
	{"level":"info","ts":"2025-01-20T15:18:39.536514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 4"}
	{"level":"info","ts":"2025-01-20T15:18:39.536522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 4"}
	{"level":"info","ts":"2025-01-20T15:18:39.543281Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:functional-232451 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-20T15:18:39.543299Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T15:18:39.543320Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T15:18:39.544136Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T15:18:39.544397Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T15:18:39.544843Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T15:18:39.545112Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2025-01-20T15:18:39.545260Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-20T15:18:39.545290Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-20T15:19:06.863205Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-01-20T15:19:06.863246Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-232451","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"]}
	{"level":"warn","ts":"2025-01-20T15:19:06.863303Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-20T15:19:06.863373Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-20T15:19:06.947403Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.125:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-20T15:19:06.947526Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.125:2379: use of closed network connection"}
	{"level":"info","ts":"2025-01-20T15:19:06.947660Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4d3edba9e42b28c","current-leader-member-id":"f4d3edba9e42b28c"}
	{"level":"info","ts":"2025-01-20T15:19:06.950847Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2025-01-20T15:19:06.950999Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2025-01-20T15:19:06.951027Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-232451","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"]}
	
	
	==> etcd [54ca42054826ea7f1e6513ba5cfabfa621fff32d37dca11870e60acb29ab6c78] <==
	{"level":"info","ts":"2025-01-20T15:19:20.440376Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-20T15:19:20.440488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T15:19:20.441195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T15:19:20.441804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T15:19:20.441199Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T15:19:20.442383Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"warn","ts":"2025-01-20T15:20:00.559195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.284229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:20:00.559478Z","caller":"traceutil/trace.go:171","msg":"trace[641265805] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:865; }","duration":"220.626259ms","start":"2025-01-20T15:20:00.338828Z","end":"2025-01-20T15:20:00.559454Z","steps":["trace[641265805] 'range keys from in-memory index tree'  (duration: 220.237335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:20:00.560270Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.965623ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12865821531421816995 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/default/mysql-rnfck\" mod_revision:0 > success:<request_put:<key:\"/registry/endpointslices/default/mysql-rnfck\" value_size:801 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-01-20T15:20:00.561891Z","caller":"traceutil/trace.go:171","msg":"trace[152791353] linearizableReadLoop","detail":"{readStateIndex:948; appliedIndex:947; }","duration":"209.36401ms","start":"2025-01-20T15:20:00.352512Z","end":"2025-01-20T15:20:00.561876Z","steps":["trace[152791353] 'read index received'  (duration: 55.935201ms)","trace[152791353] 'applied index is now lower than readState.Index'  (duration: 153.427589ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T15:20:00.561994Z","caller":"traceutil/trace.go:171","msg":"trace[2020145225] transaction","detail":"{read_only:false; response_revision:866; number_of_response:1; }","duration":"218.796936ms","start":"2025-01-20T15:20:00.343186Z","end":"2025-01-20T15:20:00.561983Z","steps":["trace[2020145225] 'process raft request'  (duration: 65.253987ms)","trace[2020145225] 'compare'  (duration: 150.417573ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T15:20:00.562120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.595088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:698"}
	{"level":"info","ts":"2025-01-20T15:20:00.562502Z","caller":"traceutil/trace.go:171","msg":"trace[972187481] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:866; }","duration":"210.00022ms","start":"2025-01-20T15:20:00.352483Z","end":"2025-01-20T15:20:00.562484Z","steps":["trace[972187481] 'agreement among raft nodes before linearized reading'  (duration: 209.531681ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:20:00.564870Z","caller":"traceutil/trace.go:171","msg":"trace[421536517] transaction","detail":"{read_only:false; response_revision:868; number_of_response:1; }","duration":"162.985755ms","start":"2025-01-20T15:20:00.401875Z","end":"2025-01-20T15:20:00.564861Z","steps":["trace[421536517] 'process raft request'  (duration: 162.960382ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:20:00.565119Z","caller":"traceutil/trace.go:171","msg":"trace[1570227971] transaction","detail":"{read_only:false; response_revision:867; number_of_response:1; }","duration":"212.473986ms","start":"2025-01-20T15:20:00.352637Z","end":"2025-01-20T15:20:00.565111Z","steps":["trace[1570227971] 'process raft request'  (duration: 212.105806ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:20:28.604512Z","caller":"traceutil/trace.go:171","msg":"trace[1882643509] linearizableReadLoop","detail":"{readStateIndex:1001; appliedIndex:1000; }","duration":"293.5863ms","start":"2025-01-20T15:20:28.310911Z","end":"2025-01-20T15:20:28.604497Z","steps":["trace[1882643509] 'read index received'  (duration: 293.443781ms)","trace[1882643509] 'applied index is now lower than readState.Index'  (duration: 142.114µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T15:20:28.604654Z","caller":"traceutil/trace.go:171","msg":"trace[2011082731] transaction","detail":"{read_only:false; response_revision:912; number_of_response:1; }","duration":"313.03333ms","start":"2025-01-20T15:20:28.291612Z","end":"2025-01-20T15:20:28.604646Z","steps":["trace[2011082731] 'process raft request'  (duration: 312.764704ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:20:28.604829Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:20:28.291555Z","time spent":"313.120726ms","remote":"127.0.0.1:34354","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:911 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-20T15:20:28.604885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.942367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T15:20:28.604919Z","caller":"traceutil/trace.go:171","msg":"trace[2140878658] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:912; }","duration":"294.042689ms","start":"2025-01-20T15:20:28.310868Z","end":"2025-01-20T15:20:28.604910Z","steps":["trace[2140878658] 'agreement among raft nodes before linearized reading'  (duration: 293.969366ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T15:21:02.106436Z","caller":"traceutil/trace.go:171","msg":"trace[1917006707] transaction","detail":"{read_only:false; response_revision:959; number_of_response:1; }","duration":"311.056462ms","start":"2025-01-20T15:21:01.795354Z","end":"2025-01-20T15:21:02.106411Z","steps":["trace[1917006707] 'process raft request'  (duration: 310.956206ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T15:21:02.107092Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:21:01.795274Z","time spent":"311.721675ms","remote":"127.0.0.1:34266","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":947,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/mysql-58ccfd96bb-cr6ch.181c6fc82c96f09e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/mysql-58ccfd96bb-cr6ch.181c6fc82c96f09e\" value_size:865 lease:3642449494567041384 >> failure:<>"}
	{"level":"info","ts":"2025-01-20T15:29:20.468649Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1169}
	{"level":"info","ts":"2025-01-20T15:29:20.483246Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1169,"took":"14.208136ms","hash":3294110014,"current-db-size-bytes":4399104,"current-db-size":"4.4 MB","current-db-size-in-use-bytes":1773568,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T15:29:20.483286Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3294110014,"revision":1169,"compact-revision":-1}
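"apply request took too long" is logged whenever a request exceeds etcd's 100ms expected-duration; the 150-300ms ranges and transactions seen here usually point at disk or CPU contention on a small 2-vCPU VM rather than data problems. They can be extracted from the live pod's logs (sketch):

	kubectl --context functional-232451 -n kube-system logs etcd-functional-232451 | grep "took too long"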
	
	
	==> kernel <==
	 15:30:02 up 13 min,  0 users,  load average: 0.16, 0.22, 0.19
	Linux functional-232451 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [870e2c8872b034af6bab95e75d5462b0a3d9cca71ecdffe6407df3dfa9ebd002] <==
	I0120 15:19:21.697749       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0120 15:19:21.698366       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0120 15:19:21.708460       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0120 15:19:21.709121       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0120 15:19:21.709606       1 aggregator.go:171] initial CRD sync complete...
	I0120 15:19:21.709677       1 autoregister_controller.go:144] Starting autoregister controller
	I0120 15:19:21.709701       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0120 15:19:21.709791       1 cache.go:39] Caches are synced for autoregister controller
	I0120 15:19:21.711148       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0120 15:19:22.029184       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0120 15:19:22.496745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0120 15:19:23.189710       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0120 15:19:23.254041       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0120 15:19:23.307969       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0120 15:19:23.314094       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0120 15:19:24.824431       1 controller.go:615] quota admission added evaluator for: endpoints
	I0120 15:19:25.071990       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0120 15:19:41.003244       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.217.76"}
	I0120 15:19:45.479362       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0120 15:19:45.635978       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.197.235"}
	I0120 15:19:46.182828       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.109.169"}
	I0120 15:19:59.632771       1 controller.go:615] quota admission added evaluator for: namespaces
	I0120 15:20:00.063251       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.189.216"}
	I0120 15:20:00.126499       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.65.10"}
	I0120 15:20:00.304942       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.63.30"}
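The "allocated clusterIPs" entries record the Services created during the functional tests (hello-node, hello-node-connect, mysql, dashboard); the resulting objects and their IPs can be listed to cross-check (sketch):

	kubectl --context functional-232451 get svc -A -o wide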
	
	
	==> kube-controller-manager [64578c99a681ee855725fb039986dd3407122f269ef83fcf75828b1fea3dca84] <==
	I0120 15:18:44.014475       1 shared_informer.go:320] Caches are synced for PVC protection
	I0120 15:18:44.016627       1 shared_informer.go:320] Caches are synced for GC
	I0120 15:18:44.016734       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0120 15:18:44.016767       1 shared_informer.go:320] Caches are synced for daemon sets
	I0120 15:18:44.016795       1 shared_informer.go:320] Caches are synced for taint
	I0120 15:18:44.016812       1 shared_informer.go:320] Caches are synced for PV protection
	I0120 15:18:44.016855       1 shared_informer.go:320] Caches are synced for TTL
	I0120 15:18:44.016888       1 shared_informer.go:320] Caches are synced for stateful set
	I0120 15:18:44.016922       1 shared_informer.go:320] Caches are synced for crt configmap
	I0120 15:18:44.016945       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0120 15:18:44.016996       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0120 15:18:44.017332       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-232451"
	I0120 15:18:44.017403       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0120 15:18:44.019132       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0120 15:18:44.019155       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0120 15:18:44.019177       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0120 15:18:44.023533       1 shared_informer.go:320] Caches are synced for resource quota
	I0120 15:18:44.034296       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0120 15:18:44.044735       1 shared_informer.go:320] Caches are synced for garbage collector
	I0120 15:18:44.058069       1 shared_informer.go:320] Caches are synced for resource quota
	I0120 15:18:44.065441       1 shared_informer.go:320] Caches are synced for garbage collector
	I0120 15:18:44.065476       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0120 15:18:44.065484       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0120 15:18:44.430436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="423.380295ms"
	I0120 15:18:44.431627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="103.109µs"
	
	
	==> kube-controller-manager [845b30ac779bc954d2b7b0239e4cf0c849670a4a7ecb3d9968992aad336fdc1f] <==
	I0120 15:20:00.033095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="50.594305ms"
	I0120 15:20:00.033182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="59.079µs"
	I0120 15:20:00.613281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="40.492907ms"
	I0120 15:20:00.631862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="18.545237ms"
	I0120 15:20:00.631998       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="52.581µs"
	I0120 15:20:00.659972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="49.052µs"
	I0120 15:20:22.706913       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-232451"
	I0120 15:20:29.889804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="18.076754ms"
	I0120 15:20:29.891999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="83.646µs"
	I0120 15:20:31.901774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="15.74545ms"
	I0120 15:20:31.901887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="35.434µs"
	I0120 15:20:53.137474       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-232451"
	I0120 15:21:02.124999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="41.714µs"
	I0120 15:21:16.045842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="81.135µs"
	I0120 15:22:18.051398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="140.924µs"
	I0120 15:22:30.043939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="44.202µs"
	I0120 15:23:18.044326       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="59.556µs"
	I0120 15:23:33.045747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="318.29µs"
	I0120 15:24:17.718978       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-232451"
	I0120 15:24:42.048013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="63.118µs"
	I0120 15:24:54.045120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="477.357µs"
	I0120 15:26:45.046835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="256.89µs"
	I0120 15:26:56.051168       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="49.843µs"
	I0120 15:29:23.194629       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-232451"
	I0120 15:30:02.047073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="53.116µs"
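The repeated "Finished syncing" entries for default/mysql-58ccfd96bb every minute or two suggest the mysql ReplicaSet's pod status keeps changing, for example a pod that is not yet Ready; the object names below are taken from this log and the node description (sketch):

	kubectl --context functional-232451 get rs mysql-58ccfd96bb
	kubectl --context functional-232451 describe pod mysql-58ccfd96bb-cr6ch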
	
	
	==> kube-proxy [e8736db30f30e6434210edced889f6b2fcd14b6312dbaa310fe3c8aacdee66be] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 15:19:22.624899       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 15:19:22.634226       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	E0120 15:19:22.634934       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 15:19:22.699734       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 15:19:22.699787       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 15:19:22.699810       1 server_linux.go:170] "Using iptables Proxier"
	I0120 15:19:22.703766       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 15:19:22.704350       1 server.go:497] "Version info" version="v1.32.0"
	I0120 15:19:22.704379       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 15:19:22.706440       1 config.go:199] "Starting service config controller"
	I0120 15:19:22.706487       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 15:19:22.706506       1 config.go:105] "Starting endpoint slice config controller"
	I0120 15:19:22.706509       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 15:19:22.707175       1 config.go:329] "Starting node config controller"
	I0120 15:19:22.707204       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 15:19:22.806651       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 15:19:22.806733       1 shared_informer.go:320] Caches are synced for service config
	I0120 15:19:22.807399       1 shared_informer.go:320] Caches are synced for node config
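The "Error cleaning up nftables rules ... Operation not supported" messages appear when kube-proxy (running in iptables mode here, per "Using iptables Proxier") tries to remove any leftover nft tables at startup and the guest kernel refuses the operation; in this setup they are generally harmless. Whether nftables works at all can be probed from the guest, assuming the nft binary is present in the image (sketch):

	out/minikube-linux-amd64 -p functional-232451 ssh "sudo nft list tables"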
	
	
	==> kube-proxy [f5612b5d325971d1ce9a183b8f12ecdc130d84b916a54478187891ff4c6cc868] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 15:18:41.849644       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 15:18:41.865270       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	E0120 15:18:41.865346       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 15:18:41.916237       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 15:18:41.916286       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 15:18:41.916311       1 server_linux.go:170] "Using iptables Proxier"
	I0120 15:18:41.919632       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 15:18:41.919839       1 server.go:497] "Version info" version="v1.32.0"
	I0120 15:18:41.919869       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 15:18:41.921374       1 config.go:199] "Starting service config controller"
	I0120 15:18:41.921424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 15:18:41.921452       1 config.go:105] "Starting endpoint slice config controller"
	I0120 15:18:41.921457       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 15:18:41.922208       1 config.go:329] "Starting node config controller"
	I0120 15:18:41.922238       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 15:18:42.021895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 15:18:42.021963       1 shared_informer.go:320] Caches are synced for service config
	I0120 15:18:42.022372       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [88bf6f7a787a5a6d1cc4db70035e48790b3a7b1110c365795a6967ab3897655e] <==
	I0120 15:19:19.356173       1 serving.go:386] Generated self-signed cert in-memory
	W0120 15:19:21.587484       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 15:19:21.587668       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 15:19:21.587699       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 15:19:21.587777       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 15:19:21.635852       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0120 15:19:21.635971       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 15:19:21.654955       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0120 15:19:21.656973       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0120 15:19:21.657000       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0120 15:19:21.657209       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 15:19:21.666220       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d4369396c1ac49d71af7264f7eb046bc81d33c5c139444b753454965b4de4346] <==
	E0120 15:18:40.793012       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.793217       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 15:18:40.793314       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.793459       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 15:18:40.793625       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.793761       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 15:18:40.793875       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.794084       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 15:18:40.794201       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.794335       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 15:18:40.794441       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.794629       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 15:18:40.794737       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.794880       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 15:18:40.796728       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.797271       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E0120 15:18:40.797388       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError"
	W0120 15:18:40.797499       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0120 15:18:40.797539       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 15:18:40.798124       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 15:18:40.797625       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 15:18:40.798266       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0120 15:18:40.798371       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0120 15:18:42.271286       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0120 15:19:06.842734       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jan 20 15:29:18 functional-232451 kubelet[5724]: E0120 15:29:18.081720    5724 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod4312ac47-67f2-426c-af6d-49b4d0b5b4cf/crio-c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8: Error finding container c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8: Status 404 returned error can't find the container with id c5020f3a85d33e71061471c0edb21f00f9be5c8f6f23f2053af9c13058c95df8
	Jan 20 15:29:18 functional-232451 kubelet[5724]: E0120 15:29:18.082006    5724 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pode8e92644-e915-4db4-a4a5-0b874ca2f0b0/crio-2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027: Error finding container 2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027: Status 404 returned error can't find the container with id 2de6f2d08610eee3f1da9a8b26a01e39662b18647ddfad6b728182d42ae82027
	Jan 20 15:29:18 functional-232451 kubelet[5724]: E0120 15:29:18.082287    5724 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode7a49802a9b36d8b86e4dd9e29f7fc94/crio-d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662: Error finding container d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662: Status 404 returned error can't find the container with id d34e94acbffaeb44a49b9b606290b2dd184c00fcf32b4b011684bef2e8dbb662
	Jan 20 15:29:18 functional-232451 kubelet[5724]: E0120 15:29:18.323699    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386958323107071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:18 functional-232451 kubelet[5724]: E0120 15:29:18.323744    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386958323107071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:20 functional-232451 kubelet[5724]: E0120 15:29:20.681635    5724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jan 20 15:29:20 functional-232451 kubelet[5724]: E0120 15:29:20.681703    5724 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jan 20 15:29:20 functional-232451 kubelet[5724]: E0120 15:29:20.681941    5724 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zzr5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(0f851fb5-106f-46d9-8980-3efab3f8ae05): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jan 20 15:29:20 functional-232451 kubelet[5724]: E0120 15:29:20.685512    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f851fb5-106f-46d9-8980-3efab3f8ae05"
	Jan 20 15:29:28 functional-232451 kubelet[5724]: E0120 15:29:28.327337    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386968326788075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:28 functional-232451 kubelet[5724]: E0120 15:29:28.327872    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386968326788075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:35 functional-232451 kubelet[5724]: E0120 15:29:35.028508    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f851fb5-106f-46d9-8980-3efab3f8ae05"
	Jan 20 15:29:38 functional-232451 kubelet[5724]: E0120 15:29:38.329937    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386978329431527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:38 functional-232451 kubelet[5724]: E0120 15:29:38.330301    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386978329431527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:48 functional-232451 kubelet[5724]: E0120 15:29:48.332669    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386988332163344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:48 functional-232451 kubelet[5724]: E0120 15:29:48.332939    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386988332163344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:49 functional-232451 kubelet[5724]: E0120 15:29:49.028738    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f851fb5-106f-46d9-8980-3efab3f8ae05"
	Jan 20 15:29:51 functional-232451 kubelet[5724]: E0120 15:29:51.342899    5724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Jan 20 15:29:51 functional-232451 kubelet[5724]: E0120 15:29:51.342962    5724 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Jan 20 15:29:51 functional-232451 kubelet[5724]: E0120 15:29:51.343391    5724 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qk6bc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-cr6ch_default(aa0d9969-9457-4b02-ae8d-3d0f15808d66): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jan 20 15:29:51 functional-232451 kubelet[5724]: E0120 15:29:51.344761    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-cr6ch" podUID="aa0d9969-9457-4b02-ae8d-3d0f15808d66"
	Jan 20 15:29:58 functional-232451 kubelet[5724]: E0120 15:29:58.337556    5724 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386998336630634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:29:58 functional-232451 kubelet[5724]: E0120 15:29:58.337668    5724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737386998336630634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225021,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 15:30:00 functional-232451 kubelet[5724]: E0120 15:30:00.027705    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f851fb5-106f-46d9-8980-3efab3f8ae05"
	Jan 20 15:30:02 functional-232451 kubelet[5724]: E0120 15:30:02.032103    5724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-cr6ch" podUID="aa0d9969-9457-4b02-ae8d-3d0f15808d66"
	
	
	==> kubernetes-dashboard [0dbf2ec322001e5a2404348242a64f96e954441bda0b392078ba9ef738641841] <==
	2025/01/20 15:20:29 Starting overwatch
	2025/01/20 15:20:29 Using namespace: kubernetes-dashboard
	2025/01/20 15:20:29 Using in-cluster config to connect to apiserver
	2025/01/20 15:20:29 Using secret token for csrf signing
	2025/01/20 15:20:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/20 15:20:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/20 15:20:29 Successful initial request to the apiserver, version: v1.32.0
	2025/01/20 15:20:29 Generating JWE encryption key
	2025/01/20 15:20:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/20 15:20:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/20 15:20:29 Initializing JWE encryption key from synchronized object
	2025/01/20 15:20:29 Creating in-cluster Sidecar client
	2025/01/20 15:20:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 15:20:29 Serving insecurely on HTTP port: 9090
	2025/01/20 15:20:59 Successful request to sidecar
	
	
	==> storage-provisioner [49ca9ec35a778cb83880c19ec444dddb8fe6a69a8add3d23bff9d5c5f8555d15] <==
	I0120 15:18:26.787999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 15:18:26.795905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 15:18:26.795959       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0120 15:18:36.763915       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0120 15:18:58.403700       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 15:18:58.404176       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c1d9817-72a3-4111-8123-202e6b17ab9e", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-232451_dc898177-e9a5-41a3-8e1f-ee16fe6a62f9 became leader
	I0120 15:18:58.404482       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-232451_dc898177-e9a5-41a3-8e1f-ee16fe6a62f9!
	I0120 15:18:58.505233       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-232451_dc898177-e9a5-41a3-8e1f-ee16fe6a62f9!
	
	
	==> storage-provisioner [65e0b3fea0b7db2373320d243db87192499334db5c7346090c3d48cdcce9495f] <==
	I0120 15:19:22.475145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 15:19:22.510933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 15:19:22.511062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 15:19:39.914424       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 15:19:39.914607       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-232451_9d59ce4e-47c8-4cf6-92d0-de7c4ad05250!
	I0120 15:19:39.914965       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c1d9817-72a3-4111-8123-202e6b17ab9e", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-232451_9d59ce4e-47c8-4cf6-92d0-de7c4ad05250 became leader
	I0120 15:19:40.015888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-232451_9d59ce4e-47c8-4cf6-92d0-de7c4ad05250!
	I0120 15:19:51.075092       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0120 15:19:51.076957       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"96c4895a-c488-43c5-a463-cd611ace3f4d", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0120 15:19:51.075202       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    467d9dc3-6d7f-461d-ad0c-38d5b4a52ae7 337 0 2025-01-20 15:17:26 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-01-20 15:17:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-96c4895a-c488-43c5-a463-cd611ace3f4d &PersistentVolumeClaim{ObjectMeta:{myclaim  default  96c4895a-c488-43c5-a463-cd611ace3f4d 778 0 2025-01-20 15:19:51 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-01-20 15:19:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-01-20 15:19:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0120 15:19:51.078957       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-96c4895a-c488-43c5-a463-cd611ace3f4d" provisioned
	I0120 15:19:51.079056       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0120 15:19:51.079068       1 volume_store.go:212] Trying to save persistentvolume "pvc-96c4895a-c488-43c5-a463-cd611ace3f4d"
	I0120 15:19:51.097185       1 volume_store.go:219] persistentvolume "pvc-96c4895a-c488-43c5-a463-cd611ace3f4d" saved
	I0120 15:19:51.098028       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"96c4895a-c488-43c5-a463-cd611ace3f4d", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-96c4895a-c488-43c5-a463-cd611ace3f4d
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-232451 -n functional-232451
helpers_test.go:261: (dbg) Run:  kubectl --context functional-232451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-cr6ch sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-232451 describe pod busybox-mount mysql-58ccfd96bb-cr6ch sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-232451 describe pod busybox-mount mysql-58ccfd96bb-cr6ch sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-232451/192.168.39.125
	Start Time:       Mon, 20 Jan 2025 15:19:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://8eeacfc29d8dbca989b2eeb8c6916d402c46373f3c1ee0941d2f3648619d4bed
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 20 Jan 2025 15:19:52 +0000
	      Finished:     Mon, 20 Jan 2025 15:19:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2j69 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-j2j69:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-232451
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.207s (1.207s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-cr6ch
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-232451/192.168.39.125
	Start Time:       Mon, 20 Jan 2025 15:20:00 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qk6bc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qk6bc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-cr6ch to functional-232451
	  Normal   Pulling    4m3s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m33s (x5 over 9m2s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m33s (x5 over 9m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m29s (x16 over 9m1s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x24 over 9m1s)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-232451/192.168.39.125
	Start Time:       Mon, 20 Jan 2025 15:19:51 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzr5d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zzr5d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/sp-pod to functional-232451
	  Normal   Pulling    4m35s (x5 over 10m)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4m5s (x5 over 9m41s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m5s (x5 over 9m41s)    kubelet            Error: ErrImagePull
	  Warning  Failed     2m58s (x16 over 9m40s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x26 over 9m40s)     kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (603.30s)
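
Note: every ErrImagePull / ImagePullBackOff event in the kubelet log and pod events above traces back to Docker Hub's "toomanyrequests" pull-rate limit rather than to a cluster-side fault. A minimal mitigation sketch, assuming the CI host can authenticate to a Docker Hub account (credentials are not part of this run); the profile name and image tags are the ones taken from the logs above:

	# Sketch: authenticate so pulls count against an account quota instead of the anonymous per-IP limit,
	# then pre-load the images the failing pods reference so kubelet never has to pull from docker.io.
	docker login
	docker pull docker.io/mysql:5.7
	docker pull docker.io/nginx:latest
	minikube -p functional-232451 image load docker.io/mysql:5.7
	minikube -p functional-232451 image load docker.io/nginx:latest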

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (440.618829ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:344: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.44s)
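
Note: the remaining ImageCommands subtests all reference kicbase/echo-server:functional-232451 on the host docker daemon, which this Setup step would normally provide, so the Load/Reload/Tag/Save failures below cascade from this single rate-limited pull. A sketch of that setup run by hand (the docker tag line is an assumption about how the harness derives the profile-scoped tag; only the pull itself appears in the log):

	# Sketch: pull the base image and give it the profile-scoped tag the later subtests expect.
	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-232451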

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image load --daemon kicbase/echo-server:functional-232451 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-232451" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image load --daemon kicbase/echo-server:functional-232451 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-232451" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (492.771742ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:237: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image save kicbase/echo-server:functional-232451 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:411: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0120 15:20:01.688530 2146407 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:20:01.688672 2146407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:20:01.688681 2146407 out.go:358] Setting ErrFile to fd 2...
	I0120 15:20:01.688685 2146407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:20:01.688859 2146407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:20:01.689481 2146407 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:20:01.689589 2146407 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:20:01.689966 2146407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:20:01.690021 2146407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:20:01.706955 2146407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40489
	I0120 15:20:01.707471 2146407 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:20:01.708072 2146407 main.go:141] libmachine: Using API Version  1
	I0120 15:20:01.708104 2146407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:20:01.708505 2146407 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:20:01.708844 2146407 main.go:141] libmachine: (functional-232451) Calling .GetState
	I0120 15:20:01.710842 2146407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:20:01.710886 2146407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:20:01.728054 2146407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0120 15:20:01.728628 2146407 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:20:01.729185 2146407 main.go:141] libmachine: Using API Version  1
	I0120 15:20:01.729209 2146407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:20:01.729530 2146407 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:20:01.729729 2146407 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:20:01.729989 2146407 ssh_runner.go:195] Run: systemctl --version
	I0120 15:20:01.730029 2146407 main.go:141] libmachine: (functional-232451) Calling .GetSSHHostname
	I0120 15:20:01.733772 2146407 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined MAC address 52:54:00:69:af:57 in network mk-functional-232451
	I0120 15:20:01.734292 2146407 main.go:141] libmachine: (functional-232451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:af:57", ip: ""} in network mk-functional-232451: {Iface:virbr1 ExpiryTime:2025-01-20 16:16:56 +0000 UTC Type:0 Mac:52:54:00:69:af:57 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:functional-232451 Clientid:01:52:54:00:69:af:57}
	I0120 15:20:01.734329 2146407 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined IP address 192.168.39.125 and MAC address 52:54:00:69:af:57 in network mk-functional-232451
	I0120 15:20:01.734460 2146407 main.go:141] libmachine: (functional-232451) Calling .GetSSHPort
	I0120 15:20:01.734684 2146407 main.go:141] libmachine: (functional-232451) Calling .GetSSHKeyPath
	I0120 15:20:01.734903 2146407 main.go:141] libmachine: (functional-232451) Calling .GetSSHUsername
	I0120 15:20:01.735094 2146407 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/functional-232451/id_rsa Username:docker}
	I0120 15:20:01.813563 2146407 cache_images.go:289] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	W0120 15:20:01.813642 2146407 cache_images.go:253] Failed to load cached images for "functional-232451": loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I0120 15:20:01.813678 2146407 cache_images.go:265] failed pushing to: functional-232451
	I0120 15:20:01.813705 2146407 main.go:141] libmachine: Making call to close driver server
	I0120 15:20:01.813717 2146407 main.go:141] libmachine: (functional-232451) Calling .Close
	I0120 15:20:01.814053 2146407 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:20:01.814073 2146407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 15:20:01.814085 2146407 main.go:141] libmachine: Making call to close driver server
	I0120 15:20:01.814111 2146407 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
	I0120 15:20:01.814173 2146407 main.go:141] libmachine: (functional-232451) Calling .Close
	I0120 15:20:01.814426 2146407 main.go:141] libmachine: Successfully made call to close driver server
	I0120 15:20:01.814445 2146407 main.go:141] libmachine: Making call to close connection to plugin binary

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
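
Note: this failure is a follow-on from ImageSaveToFile above, which never produced the tarball (the echo-server image was never loaded into the cluster earlier in the run, so there was nothing to save). For reference, the save/load round trip the two subtests exercise, run by hand against this profile (a sketch; it presumes the image is actually present in the cluster, which it was not here):

	# Sketch: export the image from the cluster to a tarball, load it back, and confirm it is listed.
	minikube -p functional-232451 image save kicbase/echo-server:functional-232451 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	minikube -p functional-232451 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	minikube -p functional-232451 image ls | grep echo-server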

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-232451
functional_test.go:419: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-232451: exit status 1 (17.407275ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-232451

                                                
                                                
** /stderr **
functional_test.go:421: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-232451

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                    
TestPreload (287.65s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-894142 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0120 16:12:14.251422 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-894142 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.146057995s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-894142 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-894142 image pull gcr.io/k8s-minikube/busybox: (1.439076075s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-894142
E0120 16:14:45.663974 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-894142: (1m31.009840435s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-894142 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-894142 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.775585869s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-894142 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-01-20 16:16:39.828751986 +0000 UTC m=+4319.116183166
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-894142 -n test-preload-894142
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-894142 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-894142 logs -n 25: (1.168609781s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-647253 ssh -n                                                                 | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC | 20 Jan 25 16:00 UTC |
	|         | multinode-647253-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-647253 ssh -n multinode-647253 sudo cat                                       | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC | 20 Jan 25 16:00 UTC |
	|         | /home/docker/cp-test_multinode-647253-m03_multinode-647253.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-647253 cp multinode-647253-m03:/home/docker/cp-test.txt                       | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC | 20 Jan 25 16:00 UTC |
	|         | multinode-647253-m02:/home/docker/cp-test_multinode-647253-m03_multinode-647253-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-647253 ssh -n                                                                 | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC | 20 Jan 25 16:00 UTC |
	|         | multinode-647253-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-647253 ssh -n multinode-647253-m02 sudo cat                                   | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC | 20 Jan 25 16:00 UTC |
	|         | /home/docker/cp-test_multinode-647253-m03_multinode-647253-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-647253 node stop m03                                                          | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC | 20 Jan 25 16:00 UTC |
	| node    | multinode-647253 node start                                                             | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC | 20 Jan 25 16:00 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-647253                                                                | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC |                     |
	| stop    | -p multinode-647253                                                                     | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:00 UTC | 20 Jan 25 16:03 UTC |
	| start   | -p multinode-647253                                                                     | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:03 UTC | 20 Jan 25 16:06 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-647253                                                                | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:06 UTC |                     |
	| node    | multinode-647253 node delete                                                            | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:06 UTC | 20 Jan 25 16:06 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-647253 stop                                                                   | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:06 UTC | 20 Jan 25 16:09 UTC |
	| start   | -p multinode-647253                                                                     | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:09 UTC | 20 Jan 25 16:11 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-647253                                                                | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:11 UTC |                     |
	| start   | -p multinode-647253-m02                                                                 | multinode-647253-m02 | jenkins | v1.35.0 | 20 Jan 25 16:11 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-647253-m03                                                                 | multinode-647253-m03 | jenkins | v1.35.0 | 20 Jan 25 16:11 UTC | 20 Jan 25 16:11 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-647253                                                                 | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:11 UTC |                     |
	| delete  | -p multinode-647253-m03                                                                 | multinode-647253-m03 | jenkins | v1.35.0 | 20 Jan 25 16:11 UTC | 20 Jan 25 16:11 UTC |
	| delete  | -p multinode-647253                                                                     | multinode-647253     | jenkins | v1.35.0 | 20 Jan 25 16:11 UTC | 20 Jan 25 16:11 UTC |
	| start   | -p test-preload-894142                                                                  | test-preload-894142  | jenkins | v1.35.0 | 20 Jan 25 16:11 UTC | 20 Jan 25 16:14 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-894142 image pull                                                          | test-preload-894142  | jenkins | v1.35.0 | 20 Jan 25 16:14 UTC | 20 Jan 25 16:14 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-894142                                                                  | test-preload-894142  | jenkins | v1.35.0 | 20 Jan 25 16:14 UTC | 20 Jan 25 16:15 UTC |
	| start   | -p test-preload-894142                                                                  | test-preload-894142  | jenkins | v1.35.0 | 20 Jan 25 16:15 UTC | 20 Jan 25 16:16 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-894142 image list                                                          | test-preload-894142  | jenkins | v1.35.0 | 20 Jan 25 16:16 UTC | 20 Jan 25 16:16 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:15:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
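The header lines above describe the klog-style format of every entry that follows: severity, date, wall-clock time, thread/PID, source file and line, then the message. For readers who want to slice these logs, here is a small illustrative Go sketch; the regex and field names are assumptions of this example, not anything shipped with minikube.

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the documented format:
	//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	// Illustrative parser only, not part of the minikube test suite.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:\]]+):(\d+)\] (.*)$`)

	func main() {
		sample := "I0120 16:15:36.875434 2170965 out.go:345] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}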
	I0120 16:15:36.875434 2170965 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:15:36.875573 2170965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:15:36.875585 2170965 out.go:358] Setting ErrFile to fd 2...
	I0120 16:15:36.875592 2170965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:15:36.875784 2170965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:15:36.876373 2170965 out.go:352] Setting JSON to false
	I0120 16:15:36.877369 2170965 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":28683,"bootTime":1737361054,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:15:36.877502 2170965 start.go:139] virtualization: kvm guest
	I0120 16:15:36.879682 2170965 out.go:177] * [test-preload-894142] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:15:36.880994 2170965 notify.go:220] Checking for updates...
	I0120 16:15:36.881024 2170965 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:15:36.882446 2170965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:15:36.883898 2170965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:15:36.885290 2170965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:15:36.886783 2170965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:15:36.888616 2170965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:15:36.890405 2170965 config.go:182] Loaded profile config "test-preload-894142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 16:15:36.890889 2170965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:15:36.890942 2170965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:15:36.906663 2170965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41651
	I0120 16:15:36.907210 2170965 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:15:36.907913 2170965 main.go:141] libmachine: Using API Version  1
	I0120 16:15:36.907934 2170965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:15:36.908335 2170965 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:15:36.908551 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:15:36.910423 2170965 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 16:15:36.911707 2170965 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:15:36.912038 2170965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:15:36.912093 2170965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:15:36.927647 2170965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0120 16:15:36.928123 2170965 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:15:36.928642 2170965 main.go:141] libmachine: Using API Version  1
	I0120 16:15:36.928664 2170965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:15:36.929019 2170965 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:15:36.929239 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:15:36.967690 2170965 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 16:15:36.968954 2170965 start.go:297] selected driver: kvm2
	I0120 16:15:36.968970 2170965 start.go:901] validating driver "kvm2" against &{Name:test-preload-894142 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-894142
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:15:36.969113 2170965 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:15:36.969896 2170965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:15:36.970000 2170965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:15:36.986025 2170965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:15:36.986409 2170965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:15:36.986445 2170965 cni.go:84] Creating CNI manager for ""
	I0120 16:15:36.986503 2170965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:15:36.986553 2170965 start.go:340] cluster config:
	{Name:test-preload-894142 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-894142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:15:36.986689 2170965 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:15:36.989708 2170965 out.go:177] * Starting "test-preload-894142" primary control-plane node in "test-preload-894142" cluster
	I0120 16:15:36.991315 2170965 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 16:15:37.022408 2170965 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0120 16:15:37.022451 2170965 cache.go:56] Caching tarball of preloaded images
	I0120 16:15:37.022673 2170965 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 16:15:37.024588 2170965 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0120 16:15:37.026244 2170965 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 16:15:37.065884 2170965 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0120 16:15:39.720065 2170965 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 16:15:39.720171 2170965 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 16:15:40.587701 2170965 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
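The download/verify exchange above fetches the v1.24.4 cri-o preload tarball with an md5 checksum embedded in the URL, then verifies the file on disk before caching it. A self-contained sketch of that verification step follows, reusing the filename and checksum from these log lines only as example inputs; it is not minikube's actual implementation.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 computes the md5 of the file at path and compares it with the
	// expected hex digest, as the preload download step does conceptually.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Filename and checksum taken from the log above, purely as an example.
		err := verifyMD5(
			"preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
			"b2ee0ab83ed99f9e7ff71cb0cf27e8f9")
		fmt.Println(err)
	}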
	I0120 16:15:40.587847 2170965 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/config.json ...
	I0120 16:15:40.588102 2170965 start.go:360] acquireMachinesLock for test-preload-894142: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:15:40.588184 2170965 start.go:364] duration metric: took 54.135µs to acquireMachinesLock for "test-preload-894142"
	I0120 16:15:40.588216 2170965 start.go:96] Skipping create...Using existing machine configuration
	I0120 16:15:40.588227 2170965 fix.go:54] fixHost starting: 
	I0120 16:15:40.588520 2170965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:15:40.588566 2170965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:15:40.604624 2170965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0120 16:15:40.605119 2170965 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:15:40.605653 2170965 main.go:141] libmachine: Using API Version  1
	I0120 16:15:40.605690 2170965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:15:40.606113 2170965 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:15:40.606338 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:15:40.606521 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetState
	I0120 16:15:40.608230 2170965 fix.go:112] recreateIfNeeded on test-preload-894142: state=Stopped err=<nil>
	I0120 16:15:40.608267 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	W0120 16:15:40.608435 2170965 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 16:15:40.611538 2170965 out.go:177] * Restarting existing kvm2 VM for "test-preload-894142" ...
	I0120 16:15:40.613049 2170965 main.go:141] libmachine: (test-preload-894142) Calling .Start
	I0120 16:15:40.613298 2170965 main.go:141] libmachine: (test-preload-894142) starting domain...
	I0120 16:15:40.613322 2170965 main.go:141] libmachine: (test-preload-894142) ensuring networks are active...
	I0120 16:15:40.614145 2170965 main.go:141] libmachine: (test-preload-894142) Ensuring network default is active
	I0120 16:15:40.614500 2170965 main.go:141] libmachine: (test-preload-894142) Ensuring network mk-test-preload-894142 is active
	I0120 16:15:40.614838 2170965 main.go:141] libmachine: (test-preload-894142) getting domain XML...
	I0120 16:15:40.615636 2170965 main.go:141] libmachine: (test-preload-894142) creating domain...
	I0120 16:15:41.854674 2170965 main.go:141] libmachine: (test-preload-894142) waiting for IP...
	I0120 16:15:41.855647 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:41.855996 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:41.856092 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:41.855979 2171017 retry.go:31] will retry after 279.911717ms: waiting for domain to come up
	I0120 16:15:42.137659 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:42.138205 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:42.138244 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:42.138192 2171017 retry.go:31] will retry after 286.33605ms: waiting for domain to come up
	I0120 16:15:42.425830 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:42.426321 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:42.426351 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:42.426239 2171017 retry.go:31] will retry after 332.163957ms: waiting for domain to come up
	I0120 16:15:42.759834 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:42.760252 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:42.760287 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:42.760224 2171017 retry.go:31] will retry after 594.869391ms: waiting for domain to come up
	I0120 16:15:43.357273 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:43.357814 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:43.357848 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:43.357761 2171017 retry.go:31] will retry after 758.021314ms: waiting for domain to come up
	I0120 16:15:44.117975 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:44.118472 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:44.118506 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:44.118430 2171017 retry.go:31] will retry after 595.347326ms: waiting for domain to come up
	I0120 16:15:44.715423 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:44.715845 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:44.715875 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:44.715792 2171017 retry.go:31] will retry after 1.056433967s: waiting for domain to come up
	I0120 16:15:45.773644 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:45.774169 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:45.774203 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:45.774145 2171017 retry.go:31] will retry after 1.406422871s: waiting for domain to come up
	I0120 16:15:47.182850 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:47.183263 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:47.183289 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:47.183249 2171017 retry.go:31] will retry after 1.337418252s: waiting for domain to come up
	I0120 16:15:48.522826 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:48.523233 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:48.523279 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:48.523194 2171017 retry.go:31] will retry after 2.04713874s: waiting for domain to come up
	I0120 16:15:50.571788 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:50.572246 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:50.572271 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:50.572200 2171017 retry.go:31] will retry after 2.085131209s: waiting for domain to come up
	I0120 16:15:52.660067 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:52.660414 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:52.660441 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:52.660382 2171017 retry.go:31] will retry after 2.61190125s: waiting for domain to come up
	I0120 16:15:55.275233 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:55.275632 2170965 main.go:141] libmachine: (test-preload-894142) DBG | unable to find current IP address of domain test-preload-894142 in network mk-test-preload-894142
	I0120 16:15:55.275658 2170965 main.go:141] libmachine: (test-preload-894142) DBG | I0120 16:15:55.275595 2171017 retry.go:31] will retry after 4.377770924s: waiting for domain to come up
	I0120 16:15:59.658558 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.659123 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has current primary IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.659152 2170965 main.go:141] libmachine: (test-preload-894142) found domain IP: 192.168.39.107
	I0120 16:15:59.659167 2170965 main.go:141] libmachine: (test-preload-894142) reserving static IP address...
	I0120 16:15:59.659645 2170965 main.go:141] libmachine: (test-preload-894142) reserved static IP address 192.168.39.107 for domain test-preload-894142
	I0120 16:15:59.659674 2170965 main.go:141] libmachine: (test-preload-894142) waiting for SSH...
	I0120 16:15:59.659696 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "test-preload-894142", mac: "52:54:00:a6:44:11", ip: "192.168.39.107"} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:15:59.659718 2170965 main.go:141] libmachine: (test-preload-894142) DBG | skip adding static IP to network mk-test-preload-894142 - found existing host DHCP lease matching {name: "test-preload-894142", mac: "52:54:00:a6:44:11", ip: "192.168.39.107"}
	I0120 16:15:59.659736 2170965 main.go:141] libmachine: (test-preload-894142) DBG | Getting to WaitForSSH function...
	I0120 16:15:59.662022 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.662477 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:15:59.662506 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.662579 2170965 main.go:141] libmachine: (test-preload-894142) DBG | Using SSH client type: external
	I0120 16:15:59.662624 2170965 main.go:141] libmachine: (test-preload-894142) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/test-preload-894142/id_rsa (-rw-------)
	I0120 16:15:59.662658 2170965 main.go:141] libmachine: (test-preload-894142) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/test-preload-894142/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:15:59.662669 2170965 main.go:141] libmachine: (test-preload-894142) DBG | About to run SSH command:
	I0120 16:15:59.662678 2170965 main.go:141] libmachine: (test-preload-894142) DBG | exit 0
	I0120 16:15:59.791524 2170965 main.go:141] libmachine: (test-preload-894142) DBG | SSH cmd err, output: <nil>: 
	I0120 16:15:59.791937 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetConfigRaw
	I0120 16:15:59.792676 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetIP
	I0120 16:15:59.795541 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.795887 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:15:59.795918 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.796121 2170965 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/config.json ...
	I0120 16:15:59.796370 2170965 machine.go:93] provisionDockerMachine start ...
	I0120 16:15:59.796390 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:15:59.796676 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:15:59.799214 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.799512 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:15:59.799546 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.799777 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:15:59.799970 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:15:59.800194 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:15:59.800398 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:15:59.800618 2170965 main.go:141] libmachine: Using SSH client type: native
	I0120 16:15:59.800891 2170965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0120 16:15:59.800906 2170965 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 16:15:59.911762 2170965 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 16:15:59.911798 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetMachineName
	I0120 16:15:59.912130 2170965 buildroot.go:166] provisioning hostname "test-preload-894142"
	I0120 16:15:59.912168 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetMachineName
	I0120 16:15:59.912356 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:15:59.915696 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.916107 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:15:59.916140 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:15:59.916352 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:15:59.916559 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:15:59.916752 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:15:59.916869 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:15:59.917069 2170965 main.go:141] libmachine: Using SSH client type: native
	I0120 16:15:59.917307 2170965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0120 16:15:59.917327 2170965 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-894142 && echo "test-preload-894142" | sudo tee /etc/hostname
	I0120 16:16:00.042945 2170965 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-894142
	
	I0120 16:16:00.042979 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:00.046423 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.046938 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:00.046968 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.047207 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:00.047390 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:00.047487 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:00.047563 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:00.047667 2170965 main.go:141] libmachine: Using SSH client type: native
	I0120 16:16:00.047904 2170965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0120 16:16:00.047926 2170965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-894142' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-894142/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-894142' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:16:00.168751 2170965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:16:00.168796 2170965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:16:00.168837 2170965 buildroot.go:174] setting up certificates
	I0120 16:16:00.168854 2170965 provision.go:84] configureAuth start
	I0120 16:16:00.168871 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetMachineName
	I0120 16:16:00.169241 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetIP
	I0120 16:16:00.172109 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.172448 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:00.172516 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.172584 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:00.174817 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.175253 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:00.175286 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.175433 2170965 provision.go:143] copyHostCerts
	I0120 16:16:00.175516 2170965 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:16:00.175545 2170965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:16:00.175620 2170965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:16:00.175715 2170965 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:16:00.175723 2170965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:16:00.175745 2170965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:16:00.175800 2170965 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:16:00.175811 2170965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:16:00.175830 2170965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:16:00.175876 2170965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.test-preload-894142 san=[127.0.0.1 192.168.39.107 localhost minikube test-preload-894142]
	I0120 16:16:00.669501 2170965 provision.go:177] copyRemoteCerts
	I0120 16:16:00.669567 2170965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:16:00.669595 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:00.672718 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.673083 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:00.673108 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.673325 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:00.673535 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:00.673667 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:00.673764 2170965 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/test-preload-894142/id_rsa Username:docker}
	I0120 16:16:00.763523 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:16:00.788571 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0120 16:16:00.815082 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 16:16:00.840760 2170965 provision.go:87] duration metric: took 671.888288ms to configureAuth
	I0120 16:16:00.840796 2170965 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:16:00.841006 2170965 config.go:182] Loaded profile config "test-preload-894142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 16:16:00.841094 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:00.844395 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.844766 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:00.844792 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:00.844987 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:00.845271 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:00.845481 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:00.845628 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:00.845797 2170965 main.go:141] libmachine: Using SSH client type: native
	I0120 16:16:00.846010 2170965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0120 16:16:00.846028 2170965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:16:01.078857 2170965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:16:01.078893 2170965 machine.go:96] duration metric: took 1.282507963s to provisionDockerMachine
	I0120 16:16:01.078906 2170965 start.go:293] postStartSetup for "test-preload-894142" (driver="kvm2")
	I0120 16:16:01.078922 2170965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:16:01.078946 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:16:01.079374 2170965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:16:01.079428 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:01.082368 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.082813 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:01.082846 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.082999 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:01.083218 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:01.083391 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:01.083540 2170965 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/test-preload-894142/id_rsa Username:docker}
	I0120 16:16:01.169991 2170965 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:16:01.174671 2170965 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:16:01.174715 2170965 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:16:01.174804 2170965 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:16:01.174908 2170965 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:16:01.175037 2170965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:16:01.185829 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:16:01.211784 2170965 start.go:296] duration metric: took 132.858955ms for postStartSetup
	I0120 16:16:01.211856 2170965 fix.go:56] duration metric: took 20.623629332s for fixHost
	I0120 16:16:01.211881 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:01.214834 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.215139 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:01.215173 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.215351 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:01.215596 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:01.215799 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:01.215947 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:01.216137 2170965 main.go:141] libmachine: Using SSH client type: native
	I0120 16:16:01.216317 2170965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0120 16:16:01.216327 2170965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:16:01.327931 2170965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737389761.299006646
	
	I0120 16:16:01.327965 2170965 fix.go:216] guest clock: 1737389761.299006646
	I0120 16:16:01.327975 2170965 fix.go:229] Guest: 2025-01-20 16:16:01.299006646 +0000 UTC Remote: 2025-01-20 16:16:01.211860705 +0000 UTC m=+24.377778918 (delta=87.145941ms)
	I0120 16:16:01.328012 2170965 fix.go:200] guest clock delta is within tolerance: 87.145941ms
	I0120 16:16:01.328018 2170965 start.go:83] releasing machines lock for "test-preload-894142", held for 20.739819764s
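For context on the fix.go lines above: the guest clock read over SSH (`date +%s.%N`) is compared against the host timestamp captured when the command returned, and the clock is only resynced if the absolute delta exceeds a tolerance. A minimal Go sketch of that comparison, using the two values from this log and an assumed 2-second tolerance (not minikube's actual source):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1737389761.299006646")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for this sketch

	// Values taken from the log lines above.
	guest, err := parseGuestClock("1737389761.299006646")
	if err != nil {
		panic(err)
	}
	host := time.Date(2025, 1, 20, 16, 16, 1, 211860705, time.UTC)

	delta := time.Duration(math.Abs(float64(guest.Sub(host))))
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync the guest clock\n", delta)
	}
}

Run as-is, this prints the same 87.145941ms delta reported in the log, so no resync is attempted.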
	I0120 16:16:01.328039 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:16:01.328349 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetIP
	I0120 16:16:01.331336 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.331754 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:01.331778 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.331950 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:16:01.332537 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:16:01.332774 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:16:01.332886 2170965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:16:01.332957 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:01.333044 2170965 ssh_runner.go:195] Run: cat /version.json
	I0120 16:16:01.333079 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:01.335960 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.336310 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:01.336346 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.336367 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.336526 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:01.336712 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:01.336809 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:01.336849 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:01.336909 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:01.336987 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:01.337091 2170965 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/test-preload-894142/id_rsa Username:docker}
	I0120 16:16:01.337179 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:01.337315 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:01.337461 2170965 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/test-preload-894142/id_rsa Username:docker}
	I0120 16:16:01.416664 2170965 ssh_runner.go:195] Run: systemctl --version
	I0120 16:16:01.443268 2170965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:16:01.588877 2170965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:16:01.595646 2170965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:16:01.595740 2170965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:16:01.614193 2170965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:16:01.614229 2170965 start.go:495] detecting cgroup driver to use...
	I0120 16:16:01.614302 2170965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:16:01.631448 2170965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:16:01.646657 2170965 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:16:01.646723 2170965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:16:01.661727 2170965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:16:01.676895 2170965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:16:01.789471 2170965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:16:01.931459 2170965 docker.go:233] disabling docker service ...
	I0120 16:16:01.931596 2170965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:16:01.947570 2170965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:16:01.962036 2170965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:16:02.103180 2170965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:16:02.227648 2170965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:16:02.243052 2170965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:16:02.263970 2170965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0120 16:16:02.264065 2170965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:16:02.276076 2170965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:16:02.276155 2170965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:16:02.288067 2170965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:16:02.299790 2170965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:16:02.311144 2170965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:16:02.322922 2170965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:16:02.334618 2170965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:16:02.353927 2170965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
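The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, re-add conmon_cgroup, and open unprivileged low ports via default_sysctls. A small Go sketch that applies the same substitutions to a sample drop-in, purely so the resulting values are visible in one place (the sample input is assumed, not copied from the VM):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Assumed sample of /etc/crio/crio.conf.d/02-crio.conf before the rewrite.
	conf := strings.Join([]string{
		`[crio.image]`,
		`pause_image = "registry.k8s.io/pause:3.9"`,
		`[crio.runtime]`,
		`conmon_cgroup = "system.slice"`,
		`cgroup_manager = "systemd"`,
	}, "\n")

	// Pin the pause image used for pod sandboxes.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// Allow unprivileged binds to low ports inside pods.
	if !strings.Contains(conf, "default_sysctls") {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]"
	}

	fmt.Println(conf)
}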
	I0120 16:16:02.365710 2170965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:16:02.376278 2170965 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:16:02.376365 2170965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:16:02.391158 2170965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:16:02.401955 2170965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:16:02.523471 2170965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:16:02.617433 2170965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:16:02.617530 2170965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:16:02.622767 2170965 start.go:563] Will wait 60s for crictl version
	I0120 16:16:02.622868 2170965 ssh_runner.go:195] Run: which crictl
	I0120 16:16:02.627143 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:16:02.671173 2170965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:16:02.671275 2170965 ssh_runner.go:195] Run: crio --version
	I0120 16:16:02.699905 2170965 ssh_runner.go:195] Run: crio --version
	I0120 16:16:02.730638 2170965 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0120 16:16:02.732045 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetIP
	I0120 16:16:02.734928 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:02.735356 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:02.735386 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:02.735603 2170965 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:16:02.740180 2170965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:16:02.754255 2170965 kubeadm.go:883] updating cluster {Name:test-preload-894142 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-894142 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:16:02.754424 2170965 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 16:16:02.754484 2170965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:16:02.791846 2170965 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0120 16:16:02.791932 2170965 ssh_runner.go:195] Run: which lz4
	I0120 16:16:02.796313 2170965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:16:02.800775 2170965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:16:02.800809 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0120 16:16:04.471258 2170965 crio.go:462] duration metric: took 1.674987079s to copy over tarball
	I0120 16:16:04.471343 2170965 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:16:06.990421 2170965 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.519040654s)
	I0120 16:16:06.990463 2170965 crio.go:469] duration metric: took 2.519169978s to extract the tarball
	I0120 16:16:06.990475 2170965 ssh_runner.go:146] rm: /preloaded.tar.lz4
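The steps above copy the ~459 MB preload tarball onto the VM and unpack it into /var with extended attributes preserved, then remove the tarball. A minimal sketch of the extract-and-time step, assuming lz4 and the tarball are present on the machine running it:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same tar invocation as the log above: preserve security.capability xattrs,
	// decompress with lz4, and extract under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", "/var",
		"-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}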
	I0120 16:16:07.032661 2170965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:16:07.081057 2170965 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0120 16:16:07.081105 2170965 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 16:16:07.081194 2170965 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:16:07.081204 2170965 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 16:16:07.081232 2170965 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0120 16:16:07.081248 2170965 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 16:16:07.081300 2170965 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0120 16:16:07.081349 2170965 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 16:16:07.081362 2170965 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 16:16:07.081436 2170965 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 16:16:07.082913 2170965 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:16:07.082928 2170965 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 16:16:07.082942 2170965 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 16:16:07.082945 2170965 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 16:16:07.082913 2170965 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0120 16:16:07.082915 2170965 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 16:16:07.082917 2170965 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0120 16:16:07.082921 2170965 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 16:16:07.277904 2170965 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0120 16:16:07.280343 2170965 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0120 16:16:07.281455 2170965 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0120 16:16:07.309990 2170965 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0120 16:16:07.310780 2170965 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0120 16:16:07.315404 2170965 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 16:16:07.333214 2170965 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0120 16:16:07.333303 2170965 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0120 16:16:07.333366 2170965 ssh_runner.go:195] Run: which crictl
	I0120 16:16:07.376962 2170965 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0120 16:16:07.387932 2170965 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0120 16:16:07.387984 2170965 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 16:16:07.388042 2170965 ssh_runner.go:195] Run: which crictl
	I0120 16:16:07.405073 2170965 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0120 16:16:07.405133 2170965 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 16:16:07.405194 2170965 ssh_runner.go:195] Run: which crictl
	I0120 16:16:07.443116 2170965 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0120 16:16:07.443166 2170965 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 16:16:07.443220 2170965 ssh_runner.go:195] Run: which crictl
	I0120 16:16:07.444405 2170965 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0120 16:16:07.444439 2170965 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0120 16:16:07.444478 2170965 ssh_runner.go:195] Run: which crictl
	I0120 16:16:07.459769 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 16:16:07.459815 2170965 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0120 16:16:07.459860 2170965 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 16:16:07.459906 2170965 ssh_runner.go:195] Run: which crictl
	I0120 16:16:07.489845 2170965 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0120 16:16:07.489906 2170965 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 16:16:07.489920 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 16:16:07.489959 2170965 ssh_runner.go:195] Run: which crictl
	I0120 16:16:07.490041 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 16:16:07.490081 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 16:16:07.490163 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 16:16:07.506657 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 16:16:07.506673 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 16:16:07.530237 2170965 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:16:07.638437 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 16:16:07.645223 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 16:16:07.645460 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 16:16:07.664331 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 16:16:07.664377 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 16:16:07.691354 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 16:16:07.691407 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 16:16:07.858556 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 16:16:07.858712 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 16:16:07.858769 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 16:16:07.858900 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 16:16:07.858987 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 16:16:07.876041 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 16:16:07.876052 2170965 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0120 16:16:07.876262 2170965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0120 16:16:08.006209 2170965 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0120 16:16:08.006377 2170965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 16:16:08.010447 2170965 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0120 16:16:08.010513 2170965 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 16:16:08.010557 2170965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 16:16:08.010532 2170965 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0120 16:16:08.010704 2170965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0120 16:16:08.010718 2170965 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0120 16:16:08.010731 2170965 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0120 16:16:08.010774 2170965 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0120 16:16:08.010940 2170965 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0120 16:16:08.011009 2170965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0120 16:16:08.028136 2170965 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0120 16:16:08.028224 2170965 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0120 16:16:08.028265 2170965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 16:16:08.028291 2170965 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0120 16:16:08.066600 2170965 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0120 16:16:08.066643 2170965 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0120 16:16:08.066756 2170965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 16:16:10.509536 2170965 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.498727194s)
	I0120 16:16:10.509621 2170965 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.49858897s)
	I0120 16:16:10.509643 2170965 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0120 16:16:10.509653 2170965 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0120 16:16:10.509662 2170965 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0120 16:16:10.509695 2170965 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.481407238s)
	I0120 16:16:10.509712 2170965 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0120 16:16:10.509724 2170965 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0120 16:16:10.509781 2170965 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.443003734s)
	I0120 16:16:10.509808 2170965 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0120 16:16:10.860468 2170965 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0120 16:16:10.860518 2170965 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 16:16:10.860578 2170965 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 16:16:11.608048 2170965 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0120 16:16:11.608085 2170965 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0120 16:16:11.608144 2170965 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0120 16:16:13.760454 2170965 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.152276277s)
	I0120 16:16:13.760496 2170965 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0120 16:16:13.760511 2170965 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 16:16:13.760565 2170965 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 16:16:14.211602 2170965 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0120 16:16:14.211644 2170965 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 16:16:14.211706 2170965 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 16:16:14.958753 2170965 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0120 16:16:14.958790 2170965 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 16:16:14.958854 2170965 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 16:16:15.813739 2170965 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0120 16:16:15.813799 2170965 cache_images.go:123] Successfully loaded all cached images
	I0120 16:16:15.813806 2170965 cache_images.go:92] duration metric: took 8.732684861s to LoadCachedImages
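LoadCachedImages above stages each image tarball under /var/lib/minikube/images and loads them into the runtime one at a time with `podman load`, as the crio.go "Loading image" lines show. A minimal harness that replays that loop for the images named in this log (the harness itself is an assumption, not minikube code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Tarball names from the log above, in the order they were loaded.
	images := []string{
		"pause_3.7",
		"coredns_v1.8.6",
		"kube-apiserver_v1.24.4",
		"etcd_3.5.3-0",
		"kube-scheduler_v1.24.4",
		"kube-controller-manager_v1.24.4",
		"kube-proxy_v1.24.4",
	}
	for _, img := range images {
		path := "/var/lib/minikube/images/" + img
		fmt.Printf("Loading image: %s\n", path)
		if out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput(); err != nil {
			fmt.Printf("load %s failed: %v\n%s\n", img, err, out)
			return
		}
	}
	fmt.Println("Successfully loaded all cached images")
}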
	I0120 16:16:15.813828 2170965 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.24.4 crio true true} ...
	I0120 16:16:15.813945 2170965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-894142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-894142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:16:15.814058 2170965 ssh_runner.go:195] Run: crio config
	I0120 16:16:15.869229 2170965 cni.go:84] Creating CNI manager for ""
	I0120 16:16:15.869254 2170965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:16:15.869267 2170965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:16:15.869301 2170965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-894142 NodeName:test-preload-894142 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:16:15.869445 2170965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-894142"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:16:15.869512 2170965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0120 16:16:15.879885 2170965 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:16:15.879987 2170965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:16:15.889784 2170965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0120 16:16:15.907884 2170965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:16:15.926348 2170965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0120 16:16:15.944918 2170965 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0120 16:16:15.949152 2170965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
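The one-liner above rewrites /etc/hosts idempotently: any line ending in a tab plus control-plane.minikube.internal is filtered out, and the current mapping is appended. A Go sketch of the same logic that writes to /tmp/hosts.updated instead of /etc/hosts so it is safe to run:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entryHost = "control-plane.minikube.internal"
	const entryIP = "192.168.39.107"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	// Same filter as `grep -v $'\tcontrol-plane.minikube.internal$'`: drop any
	// existing mapping for the name, then append the current one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+entryHost) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entryIP+"\t"+entryHost)

	// Write to a scratch path instead of /etc/hosts so the sketch is harmless.
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.updated", []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote updated hosts file to /tmp/hosts.updated")
}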
	I0120 16:16:15.962357 2170965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:16:16.083436 2170965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:16:16.101561 2170965 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142 for IP: 192.168.39.107
	I0120 16:16:16.101591 2170965 certs.go:194] generating shared ca certs ...
	I0120 16:16:16.101617 2170965 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:16:16.101809 2170965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:16:16.101873 2170965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:16:16.101887 2170965 certs.go:256] generating profile certs ...
	I0120 16:16:16.102003 2170965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/client.key
	I0120 16:16:16.102095 2170965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/apiserver.key.0ca4a3c2
	I0120 16:16:16.102162 2170965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/proxy-client.key
	I0120 16:16:16.102322 2170965 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:16:16.102380 2170965 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:16:16.102395 2170965 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:16:16.102426 2170965 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:16:16.102456 2170965 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:16:16.102488 2170965 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:16:16.102548 2170965 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:16:16.103309 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:16:16.150160 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:16:16.193179 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:16:16.227302 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:16:16.275273 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 16:16:16.309212 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 16:16:16.347865 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:16:16.376284 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 16:16:16.404017 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:16:16.431278 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:16:16.457540 2170965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:16:16.484711 2170965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:16:16.503309 2170965 ssh_runner.go:195] Run: openssl version
	I0120 16:16:16.510060 2170965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:16:16.522103 2170965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:16:16.527388 2170965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:16:16.527458 2170965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:16:16.533995 2170965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:16:16.546096 2170965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:16:16.558307 2170965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:16:16.563821 2170965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:16:16.563890 2170965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:16:16.570557 2170965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:16:16.582840 2170965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:16:16.595236 2170965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:16:16.601124 2170965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:16:16.601203 2170965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:16:16.607826 2170965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:16:16.619416 2170965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:16:16.624908 2170965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 16:16:16.631481 2170965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 16:16:16.637815 2170965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 16:16:16.644000 2170965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 16:16:16.650490 2170965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 16:16:16.656887 2170965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
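The `openssl x509 -noout -checkend 86400` runs above ask whether each certificate remains valid for at least another 24 hours (exit status 0 means it does). The same check with Go's crypto/x509 instead of shelling out, for one of the certificate paths from this log (the surrounding harness is illustrative only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Same window as `-checkend 86400`: does the cert survive the next 24h?
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires within 24h: NotAfter=%s\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate is valid beyond 24h: NotAfter=%s\n", cert.NotAfter)
	}
}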
	I0120 16:16:16.663289 2170965 kubeadm.go:392] StartCluster: {Name:test-preload-894142 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-894142 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:16:16.663402 2170965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:16:16.663470 2170965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:16:16.701317 2170965 cri.go:89] found id: ""
	I0120 16:16:16.701416 2170965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:16:16.711723 2170965 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 16:16:16.711751 2170965 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 16:16:16.711820 2170965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 16:16:16.721857 2170965 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 16:16:16.722382 2170965 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-894142" does not appear in /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:16:16.722547 2170965 kubeconfig.go:62] /home/jenkins/minikube-integration/20109-2129584/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-894142" cluster setting kubeconfig missing "test-preload-894142" context setting]
	I0120 16:16:16.722910 2170965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:16:16.723572 2170965 kapi.go:59] client config for test-preload-894142: &rest.Config{Host:"https://192.168.39.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/client.crt", KeyFile:"/home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/client.key", CAFile:"/home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243bda0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0120 16:16:16.724270 2170965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 16:16:16.734593 2170965 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.107
	I0120 16:16:16.734650 2170965 kubeadm.go:1160] stopping kube-system containers ...
	I0120 16:16:16.734666 2170965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 16:16:16.734726 2170965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:16:16.771480 2170965 cri.go:89] found id: ""
	I0120 16:16:16.771584 2170965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 16:16:16.787614 2170965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:16:16.797600 2170965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:16:16.797634 2170965 kubeadm.go:157] found existing configuration files:
	
	I0120 16:16:16.797686 2170965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:16:16.807086 2170965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:16:16.807173 2170965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:16:16.816912 2170965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:16:16.826319 2170965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:16:16.826401 2170965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:16:16.836583 2170965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:16:16.846186 2170965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:16:16.846256 2170965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:16:16.856269 2170965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:16:16.866030 2170965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:16:16.866107 2170965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:16:16.876014 2170965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:16:16.885989 2170965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:16:16.981806 2170965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:16:17.817670 2170965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:16:18.094355 2170965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:16:18.172228 2170965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:16:18.262999 2170965 api_server.go:52] waiting for apiserver process to appear ...
	I0120 16:16:18.263107 2170965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:16:18.763281 2170965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:16:19.263891 2170965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:16:19.294322 2170965 api_server.go:72] duration metric: took 1.031317459s to wait for apiserver process to appear ...
	I0120 16:16:19.294362 2170965 api_server.go:88] waiting for apiserver healthz status ...
	I0120 16:16:19.294391 2170965 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0120 16:16:19.295241 2170965 api_server.go:269] stopped: https://192.168.39.107:8443/healthz: Get "https://192.168.39.107:8443/healthz": dial tcp 192.168.39.107:8443: connect: connection refused
	I0120 16:16:19.794949 2170965 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0120 16:16:22.857973 2170965 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 16:16:22.858020 2170965 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 16:16:22.858041 2170965 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0120 16:16:22.884937 2170965 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 16:16:22.884994 2170965 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 16:16:23.294564 2170965 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0120 16:16:23.309205 2170965 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 16:16:23.309254 2170965 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 16:16:23.794926 2170965 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0120 16:16:23.801763 2170965 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 16:16:23.801798 2170965 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 16:16:24.294440 2170965 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0120 16:16:24.303966 2170965 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0120 16:16:24.314162 2170965 api_server.go:141] control plane version: v1.24.4
	I0120 16:16:24.314194 2170965 api_server.go:131] duration metric: took 5.019824433s to wait for apiserver health ...
	I0120 16:16:24.314205 2170965 cni.go:84] Creating CNI manager for ""
	I0120 16:16:24.314212 2170965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:16:24.316323 2170965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 16:16:24.317864 2170965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 16:16:24.332762 2170965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 16:16:24.355880 2170965 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 16:16:24.356019 2170965 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0120 16:16:24.356051 2170965 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0120 16:16:24.371896 2170965 system_pods.go:59] 7 kube-system pods found
	I0120 16:16:24.371936 2170965 system_pods.go:61] "coredns-6d4b75cb6d-5tj9s" [98ee6bd7-58d3-46e6-9a3c-506d849dd51e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 16:16:24.371943 2170965 system_pods.go:61] "etcd-test-preload-894142" [a766b22a-f4ba-48d3-92be-967715e70cf7] Running
	I0120 16:16:24.371949 2170965 system_pods.go:61] "kube-apiserver-test-preload-894142" [6b7b5a26-a614-4d80-a1fb-6b32f81c97f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 16:16:24.371953 2170965 system_pods.go:61] "kube-controller-manager-test-preload-894142" [1ce4f33d-acc0-40d7-9e1f-810418f5607d] Running
	I0120 16:16:24.371961 2170965 system_pods.go:61] "kube-proxy-7xptj" [a88c0e1e-d2d2-4348-87cc-813995386769] Running
	I0120 16:16:24.371964 2170965 system_pods.go:61] "kube-scheduler-test-preload-894142" [61bc9d27-a01e-462e-a4dc-809eba5a87fc] Running
	I0120 16:16:24.371968 2170965 system_pods.go:61] "storage-provisioner" [05e3a86d-8be5-4819-836d-cc88bf009768] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 16:16:24.371974 2170965 system_pods.go:74] duration metric: took 16.068081ms to wait for pod list to return data ...
	I0120 16:16:24.371982 2170965 node_conditions.go:102] verifying NodePressure condition ...
	I0120 16:16:24.376107 2170965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 16:16:24.376147 2170965 node_conditions.go:123] node cpu capacity is 2
	I0120 16:16:24.376163 2170965 node_conditions.go:105] duration metric: took 4.17226ms to run NodePressure ...
	I0120 16:16:24.376188 2170965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:16:24.671121 2170965 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 16:16:24.676240 2170965 kubeadm.go:739] kubelet initialised
	I0120 16:16:24.676263 2170965 kubeadm.go:740] duration metric: took 5.111887ms waiting for restarted kubelet to initialise ...
	I0120 16:16:24.676272 2170965 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:16:24.681074 2170965 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-5tj9s" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:24.689012 2170965 pod_ready.go:98] node "test-preload-894142" hosting pod "coredns-6d4b75cb6d-5tj9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:24.689043 2170965 pod_ready.go:82] duration metric: took 7.942673ms for pod "coredns-6d4b75cb6d-5tj9s" in "kube-system" namespace to be "Ready" ...
	E0120 16:16:24.689052 2170965 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-894142" hosting pod "coredns-6d4b75cb6d-5tj9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:24.689059 2170965 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:24.694016 2170965 pod_ready.go:98] node "test-preload-894142" hosting pod "etcd-test-preload-894142" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:24.694045 2170965 pod_ready.go:82] duration metric: took 4.976221ms for pod "etcd-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	E0120 16:16:24.694055 2170965 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-894142" hosting pod "etcd-test-preload-894142" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:24.694063 2170965 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:24.698692 2170965 pod_ready.go:98] node "test-preload-894142" hosting pod "kube-apiserver-test-preload-894142" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:24.698718 2170965 pod_ready.go:82] duration metric: took 4.647473ms for pod "kube-apiserver-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	E0120 16:16:24.698727 2170965 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-894142" hosting pod "kube-apiserver-test-preload-894142" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:24.698734 2170965 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:24.759530 2170965 pod_ready.go:98] node "test-preload-894142" hosting pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:24.759561 2170965 pod_ready.go:82] duration metric: took 60.818518ms for pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	E0120 16:16:24.759571 2170965 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-894142" hosting pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:24.759579 2170965 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7xptj" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:25.160602 2170965 pod_ready.go:98] node "test-preload-894142" hosting pod "kube-proxy-7xptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:25.160635 2170965 pod_ready.go:82] duration metric: took 401.047489ms for pod "kube-proxy-7xptj" in "kube-system" namespace to be "Ready" ...
	E0120 16:16:25.160646 2170965 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-894142" hosting pod "kube-proxy-7xptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:25.160652 2170965 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:25.560379 2170965 pod_ready.go:98] node "test-preload-894142" hosting pod "kube-scheduler-test-preload-894142" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:25.560465 2170965 pod_ready.go:82] duration metric: took 399.75861ms for pod "kube-scheduler-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	E0120 16:16:25.560484 2170965 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-894142" hosting pod "kube-scheduler-test-preload-894142" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:25.560496 2170965 pod_ready.go:39] duration metric: took 884.2139ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:16:25.560533 2170965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:16:25.573398 2170965 ops.go:34] apiserver oom_adj: -16
	I0120 16:16:25.573426 2170965 kubeadm.go:597] duration metric: took 8.861669632s to restartPrimaryControlPlane
	I0120 16:16:25.573438 2170965 kubeadm.go:394] duration metric: took 8.910160766s to StartCluster
	I0120 16:16:25.573457 2170965 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:16:25.573546 2170965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:16:25.574245 2170965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:16:25.574527 2170965 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:16:25.574598 2170965 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:16:25.574706 2170965 addons.go:69] Setting storage-provisioner=true in profile "test-preload-894142"
	I0120 16:16:25.574723 2170965 addons.go:69] Setting default-storageclass=true in profile "test-preload-894142"
	I0120 16:16:25.574745 2170965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-894142"
	I0120 16:16:25.574783 2170965 config.go:182] Loaded profile config "test-preload-894142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 16:16:25.574727 2170965 addons.go:238] Setting addon storage-provisioner=true in "test-preload-894142"
	W0120 16:16:25.574849 2170965 addons.go:247] addon storage-provisioner should already be in state true
	I0120 16:16:25.574902 2170965 host.go:66] Checking if "test-preload-894142" exists ...
	I0120 16:16:25.575216 2170965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:16:25.575234 2170965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:16:25.575263 2170965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:16:25.575276 2170965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:16:25.577524 2170965 out.go:177] * Verifying Kubernetes components...
	I0120 16:16:25.578930 2170965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:16:25.591267 2170965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0120 16:16:25.591489 2170965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0120 16:16:25.591846 2170965 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:16:25.591994 2170965 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:16:25.592381 2170965 main.go:141] libmachine: Using API Version  1
	I0120 16:16:25.592409 2170965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:16:25.592487 2170965 main.go:141] libmachine: Using API Version  1
	I0120 16:16:25.592508 2170965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:16:25.592761 2170965 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:16:25.592818 2170965 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:16:25.592959 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetState
	I0120 16:16:25.593421 2170965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:16:25.593477 2170965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:16:25.595538 2170965 kapi.go:59] client config for test-preload-894142: &rest.Config{Host:"https://192.168.39.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/client.crt", KeyFile:"/home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/test-preload-894142/client.key", CAFile:"/home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243bda0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0120 16:16:25.595937 2170965 addons.go:238] Setting addon default-storageclass=true in "test-preload-894142"
	W0120 16:16:25.595961 2170965 addons.go:247] addon default-storageclass should already be in state true
	I0120 16:16:25.595998 2170965 host.go:66] Checking if "test-preload-894142" exists ...
	I0120 16:16:25.596400 2170965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:16:25.596453 2170965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:16:25.610352 2170965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0120 16:16:25.611006 2170965 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:16:25.611639 2170965 main.go:141] libmachine: Using API Version  1
	I0120 16:16:25.611668 2170965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:16:25.612065 2170965 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:16:25.612117 2170965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0120 16:16:25.612306 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetState
	I0120 16:16:25.612597 2170965 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:16:25.613142 2170965 main.go:141] libmachine: Using API Version  1
	I0120 16:16:25.613162 2170965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:16:25.613501 2170965 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:16:25.614076 2170965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:16:25.614127 2170965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:16:25.614420 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:16:25.616566 2170965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:16:25.618048 2170965 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:16:25.618068 2170965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:16:25.618093 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:25.621552 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:25.622044 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:25.622079 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:25.622268 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:25.622464 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:25.622632 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:25.622773 2170965 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/test-preload-894142/id_rsa Username:docker}
	I0120 16:16:25.646332 2170965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34909
	I0120 16:16:25.646894 2170965 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:16:25.647638 2170965 main.go:141] libmachine: Using API Version  1
	I0120 16:16:25.647667 2170965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:16:25.648178 2170965 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:16:25.648394 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetState
	I0120 16:16:25.650168 2170965 main.go:141] libmachine: (test-preload-894142) Calling .DriverName
	I0120 16:16:25.650418 2170965 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:16:25.650440 2170965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:16:25.650465 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHHostname
	I0120 16:16:25.653594 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:25.653998 2170965 main.go:141] libmachine: (test-preload-894142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:44:11", ip: ""} in network mk-test-preload-894142: {Iface:virbr1 ExpiryTime:2025-01-20 17:15:52 +0000 UTC Type:0 Mac:52:54:00:a6:44:11 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-894142 Clientid:01:52:54:00:a6:44:11}
	I0120 16:16:25.654028 2170965 main.go:141] libmachine: (test-preload-894142) DBG | domain test-preload-894142 has defined IP address 192.168.39.107 and MAC address 52:54:00:a6:44:11 in network mk-test-preload-894142
	I0120 16:16:25.654177 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHPort
	I0120 16:16:25.654347 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHKeyPath
	I0120 16:16:25.654527 2170965 main.go:141] libmachine: (test-preload-894142) Calling .GetSSHUsername
	I0120 16:16:25.654686 2170965 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/test-preload-894142/id_rsa Username:docker}
	I0120 16:16:25.781060 2170965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:16:25.808324 2170965 node_ready.go:35] waiting up to 6m0s for node "test-preload-894142" to be "Ready" ...
	I0120 16:16:25.858944 2170965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:16:25.876432 2170965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:16:26.889685 2170965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.013201355s)
	I0120 16:16:26.889760 2170965 main.go:141] libmachine: Making call to close driver server
	I0120 16:16:26.889782 2170965 main.go:141] libmachine: (test-preload-894142) Calling .Close
	I0120 16:16:26.889890 2170965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030899559s)
	I0120 16:16:26.889939 2170965 main.go:141] libmachine: Making call to close driver server
	I0120 16:16:26.889950 2170965 main.go:141] libmachine: (test-preload-894142) Calling .Close
	I0120 16:16:26.890115 2170965 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:16:26.890148 2170965 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:16:26.890158 2170965 main.go:141] libmachine: Making call to close driver server
	I0120 16:16:26.890166 2170965 main.go:141] libmachine: (test-preload-894142) Calling .Close
	I0120 16:16:26.890283 2170965 main.go:141] libmachine: (test-preload-894142) DBG | Closing plugin on server side
	I0120 16:16:26.890291 2170965 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:16:26.890351 2170965 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:16:26.890365 2170965 main.go:141] libmachine: Making call to close driver server
	I0120 16:16:26.890376 2170965 main.go:141] libmachine: (test-preload-894142) Calling .Close
	I0120 16:16:26.890458 2170965 main.go:141] libmachine: (test-preload-894142) DBG | Closing plugin on server side
	I0120 16:16:26.890469 2170965 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:16:26.890483 2170965 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:16:26.890617 2170965 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:16:26.890630 2170965 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:16:26.900693 2170965 main.go:141] libmachine: Making call to close driver server
	I0120 16:16:26.900719 2170965 main.go:141] libmachine: (test-preload-894142) Calling .Close
	I0120 16:16:26.901010 2170965 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:16:26.901035 2170965 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:16:26.901049 2170965 main.go:141] libmachine: (test-preload-894142) DBG | Closing plugin on server side
	I0120 16:16:26.903120 2170965 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 16:16:26.904618 2170965 addons.go:514] duration metric: took 1.330029112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 16:16:27.812271 2170965 node_ready.go:53] node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:29.812907 2170965 node_ready.go:53] node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:31.813067 2170965 node_ready.go:53] node "test-preload-894142" has status "Ready":"False"
	I0120 16:16:33.312931 2170965 node_ready.go:49] node "test-preload-894142" has status "Ready":"True"
	I0120 16:16:33.312968 2170965 node_ready.go:38] duration metric: took 7.504597645s for node "test-preload-894142" to be "Ready" ...
	I0120 16:16:33.312992 2170965 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:16:33.318952 2170965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-5tj9s" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:33.324064 2170965 pod_ready.go:93] pod "coredns-6d4b75cb6d-5tj9s" in "kube-system" namespace has status "Ready":"True"
	I0120 16:16:33.324097 2170965 pod_ready.go:82] duration metric: took 5.117274ms for pod "coredns-6d4b75cb6d-5tj9s" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:33.324111 2170965 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:33.329461 2170965 pod_ready.go:93] pod "etcd-test-preload-894142" in "kube-system" namespace has status "Ready":"True"
	I0120 16:16:33.329496 2170965 pod_ready.go:82] duration metric: took 5.375943ms for pod "etcd-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:33.329510 2170965 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:33.335867 2170965 pod_ready.go:93] pod "kube-apiserver-test-preload-894142" in "kube-system" namespace has status "Ready":"True"
	I0120 16:16:33.335897 2170965 pod_ready.go:82] duration metric: took 6.377559ms for pod "kube-apiserver-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:33.335917 2170965 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:35.341477 2170965 pod_ready.go:103] pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace has status "Ready":"False"
	I0120 16:16:37.342169 2170965 pod_ready.go:103] pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace has status "Ready":"False"
	I0120 16:16:38.842879 2170965 pod_ready.go:93] pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace has status "Ready":"True"
	I0120 16:16:38.842905 2170965 pod_ready.go:82] duration metric: took 5.506979299s for pod "kube-controller-manager-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:38.842915 2170965 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7xptj" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:38.847739 2170965 pod_ready.go:93] pod "kube-proxy-7xptj" in "kube-system" namespace has status "Ready":"True"
	I0120 16:16:38.847764 2170965 pod_ready.go:82] duration metric: took 4.842835ms for pod "kube-proxy-7xptj" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:38.847772 2170965 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:38.854048 2170965 pod_ready.go:93] pod "kube-scheduler-test-preload-894142" in "kube-system" namespace has status "Ready":"True"
	I0120 16:16:38.854076 2170965 pod_ready.go:82] duration metric: took 6.297394ms for pod "kube-scheduler-test-preload-894142" in "kube-system" namespace to be "Ready" ...
	I0120 16:16:38.854086 2170965 pod_ready.go:39] duration metric: took 5.541080644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:16:38.854101 2170965 api_server.go:52] waiting for apiserver process to appear ...
	I0120 16:16:38.854151 2170965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:16:38.869186 2170965 api_server.go:72] duration metric: took 13.294621372s to wait for apiserver process to appear ...
	I0120 16:16:38.869223 2170965 api_server.go:88] waiting for apiserver healthz status ...
	I0120 16:16:38.869248 2170965 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0120 16:16:38.875639 2170965 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0120 16:16:38.877373 2170965 api_server.go:141] control plane version: v1.24.4
	I0120 16:16:38.877405 2170965 api_server.go:131] duration metric: took 8.171684ms to wait for apiserver health ...
	I0120 16:16:38.877418 2170965 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 16:16:38.883117 2170965 system_pods.go:59] 7 kube-system pods found
	I0120 16:16:38.883149 2170965 system_pods.go:61] "coredns-6d4b75cb6d-5tj9s" [98ee6bd7-58d3-46e6-9a3c-506d849dd51e] Running
	I0120 16:16:38.883156 2170965 system_pods.go:61] "etcd-test-preload-894142" [a766b22a-f4ba-48d3-92be-967715e70cf7] Running
	I0120 16:16:38.883162 2170965 system_pods.go:61] "kube-apiserver-test-preload-894142" [6b7b5a26-a614-4d80-a1fb-6b32f81c97f1] Running
	I0120 16:16:38.883168 2170965 system_pods.go:61] "kube-controller-manager-test-preload-894142" [1ce4f33d-acc0-40d7-9e1f-810418f5607d] Running
	I0120 16:16:38.883172 2170965 system_pods.go:61] "kube-proxy-7xptj" [a88c0e1e-d2d2-4348-87cc-813995386769] Running
	I0120 16:16:38.883177 2170965 system_pods.go:61] "kube-scheduler-test-preload-894142" [61bc9d27-a01e-462e-a4dc-809eba5a87fc] Running
	I0120 16:16:38.883181 2170965 system_pods.go:61] "storage-provisioner" [05e3a86d-8be5-4819-836d-cc88bf009768] Running
	I0120 16:16:38.883188 2170965 system_pods.go:74] duration metric: took 5.763605ms to wait for pod list to return data ...
	I0120 16:16:38.883197 2170965 default_sa.go:34] waiting for default service account to be created ...
	I0120 16:16:38.913780 2170965 default_sa.go:45] found service account: "default"
	I0120 16:16:38.913813 2170965 default_sa.go:55] duration metric: took 30.608218ms for default service account to be created ...
	I0120 16:16:38.913827 2170965 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 16:16:39.116099 2170965 system_pods.go:87] 7 kube-system pods found
	I0120 16:16:39.315105 2170965 system_pods.go:105] "coredns-6d4b75cb6d-5tj9s" [98ee6bd7-58d3-46e6-9a3c-506d849dd51e] Running
	I0120 16:16:39.315142 2170965 system_pods.go:105] "etcd-test-preload-894142" [a766b22a-f4ba-48d3-92be-967715e70cf7] Running
	I0120 16:16:39.315148 2170965 system_pods.go:105] "kube-apiserver-test-preload-894142" [6b7b5a26-a614-4d80-a1fb-6b32f81c97f1] Running
	I0120 16:16:39.315154 2170965 system_pods.go:105] "kube-controller-manager-test-preload-894142" [1ce4f33d-acc0-40d7-9e1f-810418f5607d] Running
	I0120 16:16:39.315159 2170965 system_pods.go:105] "kube-proxy-7xptj" [a88c0e1e-d2d2-4348-87cc-813995386769] Running
	I0120 16:16:39.315163 2170965 system_pods.go:105] "kube-scheduler-test-preload-894142" [61bc9d27-a01e-462e-a4dc-809eba5a87fc] Running
	I0120 16:16:39.315168 2170965 system_pods.go:105] "storage-provisioner" [05e3a86d-8be5-4819-836d-cc88bf009768] Running
	I0120 16:16:39.315177 2170965 system_pods.go:147] duration metric: took 401.342161ms to wait for k8s-apps to be running ...
	I0120 16:16:39.315186 2170965 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 16:16:39.315240 2170965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:16:39.334517 2170965 system_svc.go:56] duration metric: took 19.318265ms WaitForService to wait for kubelet
	I0120 16:16:39.334556 2170965 kubeadm.go:582] duration metric: took 13.760000901s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:16:39.334576 2170965 node_conditions.go:102] verifying NodePressure condition ...
	I0120 16:16:39.514241 2170965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 16:16:39.514271 2170965 node_conditions.go:123] node cpu capacity is 2
	I0120 16:16:39.514285 2170965 node_conditions.go:105] duration metric: took 179.703529ms to run NodePressure ...
	I0120 16:16:39.514304 2170965 start.go:241] waiting for startup goroutines ...
	I0120 16:16:39.514313 2170965 start.go:246] waiting for cluster config update ...
	I0120 16:16:39.514325 2170965 start.go:255] writing updated cluster config ...
	I0120 16:16:39.514633 2170965 ssh_runner.go:195] Run: rm -f paused
	I0120 16:16:39.565273 2170965 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0120 16:16:39.567357 2170965 out.go:201] 
	W0120 16:16:39.568804 2170965 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0120 16:16:39.569991 2170965 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0120 16:16:39.571264 2170965 out.go:177] * Done! kubectl is now configured to use "test-preload-894142" cluster and "default" namespace by default
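
	The restart sequence above reduces to: re-run the kubeadm init phases, poll the apiserver /healthz endpoint until it returns 200 (tolerating the 403 responses before the RBAC bootstrap hook finishes and the 500 responses while post-start hooks run), then wait for the node and the system-critical pods to report "Ready". As a minimal standalone sketch of that healthz polling pattern only (this is not minikube's implementation; the endpoint, interval, and timeout below are illustrative assumptions taken from the log):

	// healthzpoll: a minimal sketch of the apiserver /healthz polling pattern
	// seen in the log above. NOT minikube's code; endpoint, interval, and
	// timeout are assumptions for illustration.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 ("ok") or the timeout
	// elapses. Non-200 responses (e.g. 403 before RBAC bootstrap, 500 while
	// post-start hooks complete) are printed and retried.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed certificate during bootstrap,
			// so this sketch skips verification; real callers should pin the
			// cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// e.g. "connection refused" while the apiserver container restarts
				fmt.Printf("stopped: %v\n", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
				fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Values loosely mirror the log: ~500ms between checks, a few minutes overall.
		if err := waitForHealthz("https://192.168.39.107:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

	The same poll-until-deadline shape is what pod_ready.go and node_ready.go apply to pod and node conditions later in the log, just with a Kubernetes client lookup in place of the HTTP GET.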
	
	
	==> CRI-O <==
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.502310031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737389800502286664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f851e10-0906-4591-9abd-02f701b181df name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.503036213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4aa9986a-45bf-4193-9672-b8f31004cafe name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.503135720Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4aa9986a-45bf-4193-9672-b8f31004cafe name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.506148121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5bd2e1b3a5925e18a9b40f2e161ffb02960139c1b41aedbe307cee5bbb678c0,PodSandboxId:f5af741bd54d3992fc7a5a90c89b969c39b7585681a948d6876326579261213f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737389791405193483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tj9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ee6bd7-58d3-46e6-9a3c-506d849dd51e,},Annotations:map[string]string{io.kubernetes.container.hash: 25ee5a82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea64ffb4a4493a94ebcb0069dfd3f899cae9d958aa0236cd13cc639ab7162a83,PodSandboxId:898fcc382b3579cf805c6217c6f0cca8b16f86e7545653d265b904cf38c8a21b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737389784097737970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 05e3a86d-8be5-4819-836d-cc88bf009768,},Annotations:map[string]string{io.kubernetes.container.hash: a1467e20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1943874941cd89af1ab639f60230565781e73048dd1b74a81922acc28eb55da7,PodSandboxId:c490e022353f4ed310de2371768748ba445eefdd1576d2844fe2591935dc841a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737389784038287424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8
8c0e1e-d2d2-4348-87cc-813995386769,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe3f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab80b4cb9c8260abea6414042db9202ad62170d98a0e2fa4672a357c92d5060d,PodSandboxId:9e2120e4e96514beb97e03f6dd4beb86f3617c3b5ed9831f14247774324e31d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737389779017808562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3ab5a9b0b5b5a80897bba3c7412e72e,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9a4a7785,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5049e851f0c358ba30650bdba2b5f5ed50c8f48e90930b08dc92b260a6149fe,PodSandboxId:be4ff80ee67969956235d510a9a41fd6204f13248a893afa0edeebd5c2e01774,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737389779004010546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0a9763bebbb15726b625503d5849b8,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc0d61d7b1ee0ce3c6f50c3320bdf9604e2280f5191f8eb6693d8511f48b118,PodSandboxId:b6277f8e804a9e3d163d6d43d6fb2d643cd7ce4a71fb4f4a8dbb84d51777c4c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737389778967401652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cea4bdb157ce8de0b87ab29dcec04bb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 1a6aa8b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb2784a3a20283b86365725d0f22ed2fa34f8f6ecd4c906ab63f4b037fe34a,PodSandboxId:6c6c0b0fba89fd80eb92266822f1c78ca3608c64861b13a0d43a976b88d92c5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737389778971761929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68fd3ad525e725c669b9786c92a97039,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4aa9986a-45bf-4193-9672-b8f31004cafe name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.552307901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56e9f42b-3a0c-4323-8cb5-f0f53d7f97e3 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.552405627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56e9f42b-3a0c-4323-8cb5-f0f53d7f97e3 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.553473976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a856df87-ba84-401b-9d58-d3b4320defe6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.553885673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737389800553858082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a856df87-ba84-401b-9d58-d3b4320defe6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.554795216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c02fd84-4cd9-4fc5-b45b-056e824cfe2d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.554848771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c02fd84-4cd9-4fc5-b45b-056e824cfe2d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.555016854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5bd2e1b3a5925e18a9b40f2e161ffb02960139c1b41aedbe307cee5bbb678c0,PodSandboxId:f5af741bd54d3992fc7a5a90c89b969c39b7585681a948d6876326579261213f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737389791405193483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tj9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ee6bd7-58d3-46e6-9a3c-506d849dd51e,},Annotations:map[string]string{io.kubernetes.container.hash: 25ee5a82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea64ffb4a4493a94ebcb0069dfd3f899cae9d958aa0236cd13cc639ab7162a83,PodSandboxId:898fcc382b3579cf805c6217c6f0cca8b16f86e7545653d265b904cf38c8a21b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737389784097737970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 05e3a86d-8be5-4819-836d-cc88bf009768,},Annotations:map[string]string{io.kubernetes.container.hash: a1467e20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1943874941cd89af1ab639f60230565781e73048dd1b74a81922acc28eb55da7,PodSandboxId:c490e022353f4ed310de2371768748ba445eefdd1576d2844fe2591935dc841a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737389784038287424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8
8c0e1e-d2d2-4348-87cc-813995386769,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe3f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab80b4cb9c8260abea6414042db9202ad62170d98a0e2fa4672a357c92d5060d,PodSandboxId:9e2120e4e96514beb97e03f6dd4beb86f3617c3b5ed9831f14247774324e31d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737389779017808562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3ab5a9b0b5b5a80897bba3c7412e72e,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9a4a7785,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5049e851f0c358ba30650bdba2b5f5ed50c8f48e90930b08dc92b260a6149fe,PodSandboxId:be4ff80ee67969956235d510a9a41fd6204f13248a893afa0edeebd5c2e01774,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737389779004010546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0a9763bebbb15726b625503d5849b8,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc0d61d7b1ee0ce3c6f50c3320bdf9604e2280f5191f8eb6693d8511f48b118,PodSandboxId:b6277f8e804a9e3d163d6d43d6fb2d643cd7ce4a71fb4f4a8dbb84d51777c4c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737389778967401652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cea4bdb157ce8de0b87ab29dcec04bb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 1a6aa8b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb2784a3a20283b86365725d0f22ed2fa34f8f6ecd4c906ab63f4b037fe34a,PodSandboxId:6c6c0b0fba89fd80eb92266822f1c78ca3608c64861b13a0d43a976b88d92c5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737389778971761929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68fd3ad525e725c669b9786c92a97039,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c02fd84-4cd9-4fc5-b45b-056e824cfe2d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.595670774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa86d88a-9952-4634-8325-64a115d23b19 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.595740467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa86d88a-9952-4634-8325-64a115d23b19 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.597206703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6463fcc-cbc0-4d13-8e32-f1e3e523ff21 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.597626374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737389800597599423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6463fcc-cbc0-4d13-8e32-f1e3e523ff21 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.598324550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=162aaadd-cb72-4dfc-9485-e2c22f5531b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.598376187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=162aaadd-cb72-4dfc-9485-e2c22f5531b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.598586060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5bd2e1b3a5925e18a9b40f2e161ffb02960139c1b41aedbe307cee5bbb678c0,PodSandboxId:f5af741bd54d3992fc7a5a90c89b969c39b7585681a948d6876326579261213f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737389791405193483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tj9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ee6bd7-58d3-46e6-9a3c-506d849dd51e,},Annotations:map[string]string{io.kubernetes.container.hash: 25ee5a82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea64ffb4a4493a94ebcb0069dfd3f899cae9d958aa0236cd13cc639ab7162a83,PodSandboxId:898fcc382b3579cf805c6217c6f0cca8b16f86e7545653d265b904cf38c8a21b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737389784097737970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 05e3a86d-8be5-4819-836d-cc88bf009768,},Annotations:map[string]string{io.kubernetes.container.hash: a1467e20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1943874941cd89af1ab639f60230565781e73048dd1b74a81922acc28eb55da7,PodSandboxId:c490e022353f4ed310de2371768748ba445eefdd1576d2844fe2591935dc841a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737389784038287424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8
8c0e1e-d2d2-4348-87cc-813995386769,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe3f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab80b4cb9c8260abea6414042db9202ad62170d98a0e2fa4672a357c92d5060d,PodSandboxId:9e2120e4e96514beb97e03f6dd4beb86f3617c3b5ed9831f14247774324e31d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737389779017808562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3ab5a9b0b5b5a80897bba3c7412e72e,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9a4a7785,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5049e851f0c358ba30650bdba2b5f5ed50c8f48e90930b08dc92b260a6149fe,PodSandboxId:be4ff80ee67969956235d510a9a41fd6204f13248a893afa0edeebd5c2e01774,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737389779004010546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0a9763bebbb15726b625503d5849b8,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc0d61d7b1ee0ce3c6f50c3320bdf9604e2280f5191f8eb6693d8511f48b118,PodSandboxId:b6277f8e804a9e3d163d6d43d6fb2d643cd7ce4a71fb4f4a8dbb84d51777c4c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737389778967401652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cea4bdb157ce8de0b87ab29dcec04bb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 1a6aa8b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb2784a3a20283b86365725d0f22ed2fa34f8f6ecd4c906ab63f4b037fe34a,PodSandboxId:6c6c0b0fba89fd80eb92266822f1c78ca3608c64861b13a0d43a976b88d92c5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737389778971761929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68fd3ad525e725c669b9786c92a97039,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=162aaadd-cb72-4dfc-9485-e2c22f5531b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.634683192Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=790c2b51-4ace-40c2-8c0a-f030ff2b0e81 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.634780202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=790c2b51-4ace-40c2-8c0a-f030ff2b0e81 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.635875063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71616b43-2532-4e01-9326-e18f6f641d7c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.636456043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737389800636430701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71616b43-2532-4e01-9326-e18f6f641d7c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.637033359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd936b0e-e184-4811-b70a-05cf0e4b04e7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.637148541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd936b0e-e184-4811-b70a-05cf0e4b04e7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:16:40 test-preload-894142 crio[670]: time="2025-01-20 16:16:40.637315604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5bd2e1b3a5925e18a9b40f2e161ffb02960139c1b41aedbe307cee5bbb678c0,PodSandboxId:f5af741bd54d3992fc7a5a90c89b969c39b7585681a948d6876326579261213f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737389791405193483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tj9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ee6bd7-58d3-46e6-9a3c-506d849dd51e,},Annotations:map[string]string{io.kubernetes.container.hash: 25ee5a82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea64ffb4a4493a94ebcb0069dfd3f899cae9d958aa0236cd13cc639ab7162a83,PodSandboxId:898fcc382b3579cf805c6217c6f0cca8b16f86e7545653d265b904cf38c8a21b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737389784097737970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 05e3a86d-8be5-4819-836d-cc88bf009768,},Annotations:map[string]string{io.kubernetes.container.hash: a1467e20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1943874941cd89af1ab639f60230565781e73048dd1b74a81922acc28eb55da7,PodSandboxId:c490e022353f4ed310de2371768748ba445eefdd1576d2844fe2591935dc841a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737389784038287424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8
8c0e1e-d2d2-4348-87cc-813995386769,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe3f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab80b4cb9c8260abea6414042db9202ad62170d98a0e2fa4672a357c92d5060d,PodSandboxId:9e2120e4e96514beb97e03f6dd4beb86f3617c3b5ed9831f14247774324e31d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737389779017808562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3ab5a9b0b5b5a80897bba3c7412e72e,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9a4a7785,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5049e851f0c358ba30650bdba2b5f5ed50c8f48e90930b08dc92b260a6149fe,PodSandboxId:be4ff80ee67969956235d510a9a41fd6204f13248a893afa0edeebd5c2e01774,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737389779004010546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0a9763bebbb15726b625503d5849b8,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc0d61d7b1ee0ce3c6f50c3320bdf9604e2280f5191f8eb6693d8511f48b118,PodSandboxId:b6277f8e804a9e3d163d6d43d6fb2d643cd7ce4a71fb4f4a8dbb84d51777c4c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737389778967401652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cea4bdb157ce8de0b87ab29dcec04bb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 1a6aa8b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb2784a3a20283b86365725d0f22ed2fa34f8f6ecd4c906ab63f4b037fe34a,PodSandboxId:6c6c0b0fba89fd80eb92266822f1c78ca3608c64861b13a0d43a976b88d92c5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737389778971761929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-894142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68fd3ad525e725c669b9786c92a97039,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd936b0e-e184-4811-b70a-05cf0e4b04e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e5bd2e1b3a592       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   f5af741bd54d3       coredns-6d4b75cb6d-5tj9s
	ea64ffb4a4493       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   898fcc382b357       storage-provisioner
	1943874941cd8       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   c490e022353f4       kube-proxy-7xptj
	ab80b4cb9c826       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   9e2120e4e9651       etcd-test-preload-894142
	a5049e851f0c3       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   be4ff80ee6796       kube-scheduler-test-preload-894142
	f3bb2784a3a20       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   6c6c0b0fba89f       kube-controller-manager-test-preload-894142
	7fc0d61d7b1ee       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   b6277f8e804a9       kube-apiserver-test-preload-894142
	
	
	==> coredns [e5bd2e1b3a5925e18a9b40f2e161ffb02960139c1b41aedbe307cee5bbb678c0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:45779 - 54562 "HINFO IN 8787980216622316982.9030484710994474797. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029640019s
	
	
	==> describe nodes <==
	Name:               test-preload-894142
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-894142
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
	                    minikube.k8s.io/name=test-preload-894142
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T16_13_09_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 16:13:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-894142
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 16:16:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 16:16:33 +0000   Mon, 20 Jan 2025 16:13:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 16:16:33 +0000   Mon, 20 Jan 2025 16:13:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 16:16:33 +0000   Mon, 20 Jan 2025 16:13:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 16:16:33 +0000   Mon, 20 Jan 2025 16:16:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    test-preload-894142
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 13c70c022cf94cf6b7c0df48dd1a55c7
	  System UUID:                13c70c02-2cf9-4cf6-b7c0-df48dd1a55c7
	  Boot ID:                    36922753-4a67-4512-a79d-c5caa607e0b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5tj9s                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m18s
	  kube-system                 etcd-test-preload-894142                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m31s
	  kube-system                 kube-apiserver-test-preload-894142             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kube-controller-manager-test-preload-894142    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kube-proxy-7xptj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-scheduler-test-preload-894142             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s (x4 over 3m39s)  kubelet          Node test-preload-894142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m39s (x3 over 3m39s)  kubelet          Node test-preload-894142 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    3m39s (x4 over 3m39s)  kubelet          Node test-preload-894142 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  3m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m31s                  kubelet          Node test-preload-894142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m31s                  kubelet          Node test-preload-894142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m31s                  kubelet          Node test-preload-894142 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m21s                  kubelet          Node test-preload-894142 status is now: NodeReady
	  Normal  RegisteredNode           3m19s                  node-controller  Node test-preload-894142 event: Registered Node test-preload-894142 in Controller
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)      kubelet          Node test-preload-894142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)      kubelet          Node test-preload-894142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)      kubelet          Node test-preload-894142 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                     node-controller  Node test-preload-894142 event: Registered Node test-preload-894142 in Controller
	
	
	==> dmesg <==
	[Jan20 16:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053388] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042854] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.995061] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.860339] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.456868] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan20 16:16] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057129] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057073] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.171582] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.151538] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.287356] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[ +13.557933] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.065950] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.928671] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +5.649440] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.987662] systemd-fstab-generator[1772]: Ignoring "noauto" option for root device
	[  +5.541841] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [ab80b4cb9c8260abea6414042db9202ad62170d98a0e2fa4672a357c92d5060d] <==
	{"level":"info","ts":"2025-01-20T16:16:19.403Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ec1614c5c0f7335e","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-20T16:16:19.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e switched to configuration voters=(17011807482017166174)"}
	{"level":"info","ts":"2025-01-20T16:16:19.405Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","added-peer-id":"ec1614c5c0f7335e","added-peer-peer-urls":["https://192.168.39.107:2380"]}
	{"level":"info","ts":"2025-01-20T16:16:19.405Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T16:16:19.406Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T16:16:19.409Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ec1614c5c0f7335e","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-01-20T16:16:19.413Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-20T16:16:19.414Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2025-01-20T16:16:19.414Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2025-01-20T16:16:19.413Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-20T16:16:19.414Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-20T16:16:20.251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-20T16:16:20.251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-20T16:16:20.251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2025-01-20T16:16:20.251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2025-01-20T16:16:20.251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2025-01-20T16:16:20.251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2025-01-20T16:16:20.251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2025-01-20T16:16:20.252Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:test-preload-894142 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-20T16:16:20.255Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T16:16:20.258Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T16:16:20.259Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.107:2379"}
	{"level":"info","ts":"2025-01-20T16:16:20.270Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-20T16:16:20.270Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-20T16:16:20.289Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 16:16:40 up 0 min,  0 users,  load average: 1.26, 0.36, 0.13
	Linux test-preload-894142 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7fc0d61d7b1ee0ce3c6f50c3320bdf9604e2280f5191f8eb6693d8511f48b118] <==
	I0120 16:16:22.833525       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0120 16:16:22.833561       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0120 16:16:22.844898       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0120 16:16:22.844930       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0120 16:16:22.845342       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0120 16:16:22.860461       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0120 16:16:22.941119       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0120 16:16:22.945316       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0120 16:16:22.973446       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0120 16:16:22.997694       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0120 16:16:23.004729       1 cache.go:39] Caches are synced for autoregister controller
	I0120 16:16:23.005270       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0120 16:16:23.006795       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0120 16:16:23.009296       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0120 16:16:23.037139       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0120 16:16:23.491897       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0120 16:16:23.808677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0120 16:16:24.541925       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0120 16:16:24.551208       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0120 16:16:24.567413       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0120 16:16:24.616328       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0120 16:16:24.635760       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0120 16:16:24.648914       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0120 16:16:35.636243       1 controller.go:611] quota admission added evaluator for: endpoints
	I0120 16:16:35.825513       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f3bb2784a3a20283b86365725d0f22ed2fa34f8f6ecd4c906ab63f4b037fe34a] <==
	I0120 16:16:35.652160       1 disruption.go:371] Sending events to api server.
	I0120 16:16:35.691437       1 shared_informer.go:262] Caches are synced for resource quota
	I0120 16:16:35.692650       1 shared_informer.go:262] Caches are synced for resource quota
	I0120 16:16:35.701310       1 shared_informer.go:262] Caches are synced for expand
	I0120 16:16:35.709889       1 shared_informer.go:262] Caches are synced for PV protection
	W0120 16:16:35.736188       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-894142" does not exist
	I0120 16:16:35.745732       1 shared_informer.go:262] Caches are synced for node
	I0120 16:16:35.745893       1 range_allocator.go:173] Starting range CIDR allocator
	I0120 16:16:35.745906       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0120 16:16:35.745918       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0120 16:16:35.771163       1 shared_informer.go:262] Caches are synced for TTL
	I0120 16:16:35.780429       1 shared_informer.go:262] Caches are synced for persistent volume
	I0120 16:16:35.796929       1 shared_informer.go:262] Caches are synced for daemon sets
	I0120 16:16:35.804369       1 shared_informer.go:262] Caches are synced for attach detach
	I0120 16:16:35.810872       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0120 16:16:35.820310       1 shared_informer.go:262] Caches are synced for GC
	I0120 16:16:35.827035       1 shared_informer.go:262] Caches are synced for taint
	I0120 16:16:35.827197       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0120 16:16:35.827619       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0120 16:16:35.827840       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-894142. Assuming now as a timestamp.
	I0120 16:16:35.827875       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0120 16:16:35.828204       1 event.go:294] "Event occurred" object="test-preload-894142" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-894142 event: Registered Node test-preload-894142 in Controller"
	I0120 16:16:36.197781       1 shared_informer.go:262] Caches are synced for garbage collector
	I0120 16:16:36.197834       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0120 16:16:36.235240       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [1943874941cd89af1ab639f60230565781e73048dd1b74a81922acc28eb55da7] <==
	I0120 16:16:24.457748       1 node.go:163] Successfully retrieved node IP: 192.168.39.107
	I0120 16:16:24.458015       1 server_others.go:138] "Detected node IP" address="192.168.39.107"
	I0120 16:16:24.458161       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0120 16:16:24.519202       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0120 16:16:24.519219       1 server_others.go:206] "Using iptables Proxier"
	I0120 16:16:24.520262       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0120 16:16:24.521391       1 server.go:661] "Version info" version="v1.24.4"
	I0120 16:16:24.523235       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 16:16:24.526683       1 config.go:317] "Starting service config controller"
	I0120 16:16:24.526727       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0120 16:16:24.526752       1 config.go:226] "Starting endpoint slice config controller"
	I0120 16:16:24.526773       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0120 16:16:24.533664       1 config.go:444] "Starting node config controller"
	I0120 16:16:24.533715       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0120 16:16:24.627426       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0120 16:16:24.627641       1 shared_informer.go:262] Caches are synced for service config
	I0120 16:16:24.634556       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [a5049e851f0c358ba30650bdba2b5f5ed50c8f48e90930b08dc92b260a6149fe] <==
	I0120 16:16:20.479720       1 serving.go:348] Generated self-signed cert in-memory
	W0120 16:16:22.877348       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 16:16:22.877448       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 16:16:22.877469       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 16:16:22.877483       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 16:16:22.929469       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0120 16:16:22.929513       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 16:16:22.935282       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0120 16:16:22.935378       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 16:16:22.935491       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0120 16:16:22.937618       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0120 16:16:23.037755       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.245626    1128 apiserver.go:52] "Watching apiserver"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.252021    1128 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.252204    1128 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.252249    1128 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: E0120 16:16:23.253930    1128 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-5tj9s" podUID=98ee6bd7-58d3-46e6-9a3c-506d849dd51e
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: E0120 16:16:23.301838    1128 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.309468    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c8x5\" (UniqueName: \"kubernetes.io/projected/a88c0e1e-d2d2-4348-87cc-813995386769-kube-api-access-8c8x5\") pod \"kube-proxy-7xptj\" (UID: \"a88c0e1e-d2d2-4348-87cc-813995386769\") " pod="kube-system/kube-proxy-7xptj"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.309863    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a88c0e1e-d2d2-4348-87cc-813995386769-xtables-lock\") pod \"kube-proxy-7xptj\" (UID: \"a88c0e1e-d2d2-4348-87cc-813995386769\") " pod="kube-system/kube-proxy-7xptj"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.310169    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqgq\" (UniqueName: \"kubernetes.io/projected/05e3a86d-8be5-4819-836d-cc88bf009768-kube-api-access-hzqgq\") pod \"storage-provisioner\" (UID: \"05e3a86d-8be5-4819-836d-cc88bf009768\") " pod="kube-system/storage-provisioner"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.310441    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a88c0e1e-d2d2-4348-87cc-813995386769-kube-proxy\") pod \"kube-proxy-7xptj\" (UID: \"a88c0e1e-d2d2-4348-87cc-813995386769\") " pod="kube-system/kube-proxy-7xptj"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.310669    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a88c0e1e-d2d2-4348-87cc-813995386769-lib-modules\") pod \"kube-proxy-7xptj\" (UID: \"a88c0e1e-d2d2-4348-87cc-813995386769\") " pod="kube-system/kube-proxy-7xptj"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.310802    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/05e3a86d-8be5-4819-836d-cc88bf009768-tmp\") pod \"storage-provisioner\" (UID: \"05e3a86d-8be5-4819-836d-cc88bf009768\") " pod="kube-system/storage-provisioner"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.310948    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume\") pod \"coredns-6d4b75cb6d-5tj9s\" (UID: \"98ee6bd7-58d3-46e6-9a3c-506d849dd51e\") " pod="kube-system/coredns-6d4b75cb6d-5tj9s"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.311171    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2twlv\" (UniqueName: \"kubernetes.io/projected/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-kube-api-access-2twlv\") pod \"coredns-6d4b75cb6d-5tj9s\" (UID: \"98ee6bd7-58d3-46e6-9a3c-506d849dd51e\") " pod="kube-system/coredns-6d4b75cb6d-5tj9s"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: I0120 16:16:23.311331    1128 reconciler.go:159] "Reconciler: start to sync state"
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: E0120 16:16:23.413974    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: E0120 16:16:23.414409    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume podName:98ee6bd7-58d3-46e6-9a3c-506d849dd51e nodeName:}" failed. No retries permitted until 2025-01-20 16:16:23.914377124 +0000 UTC m=+5.826555940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume") pod "coredns-6d4b75cb6d-5tj9s" (UID: "98ee6bd7-58d3-46e6-9a3c-506d849dd51e") : object "kube-system"/"coredns" not registered
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: E0120 16:16:23.916853    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 16:16:23 test-preload-894142 kubelet[1128]: E0120 16:16:23.916916    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume podName:98ee6bd7-58d3-46e6-9a3c-506d849dd51e nodeName:}" failed. No retries permitted until 2025-01-20 16:16:24.916902377 +0000 UTC m=+6.829081183 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume") pod "coredns-6d4b75cb6d-5tj9s" (UID: "98ee6bd7-58d3-46e6-9a3c-506d849dd51e") : object "kube-system"/"coredns" not registered
	Jan 20 16:16:24 test-preload-894142 kubelet[1128]: E0120 16:16:24.928746    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 16:16:24 test-preload-894142 kubelet[1128]: E0120 16:16:24.928842    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume podName:98ee6bd7-58d3-46e6-9a3c-506d849dd51e nodeName:}" failed. No retries permitted until 2025-01-20 16:16:26.928826446 +0000 UTC m=+8.841005263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume") pod "coredns-6d4b75cb6d-5tj9s" (UID: "98ee6bd7-58d3-46e6-9a3c-506d849dd51e") : object "kube-system"/"coredns" not registered
	Jan 20 16:16:25 test-preload-894142 kubelet[1128]: E0120 16:16:25.353595    1128 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-5tj9s" podUID=98ee6bd7-58d3-46e6-9a3c-506d849dd51e
	Jan 20 16:16:26 test-preload-894142 kubelet[1128]: E0120 16:16:26.949377    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 16:16:26 test-preload-894142 kubelet[1128]: E0120 16:16:26.949537    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume podName:98ee6bd7-58d3-46e6-9a3c-506d849dd51e nodeName:}" failed. No retries permitted until 2025-01-20 16:16:30.949495587 +0000 UTC m=+12.861674409 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/98ee6bd7-58d3-46e6-9a3c-506d849dd51e-config-volume") pod "coredns-6d4b75cb6d-5tj9s" (UID: "98ee6bd7-58d3-46e6-9a3c-506d849dd51e") : object "kube-system"/"coredns" not registered
	Jan 20 16:16:27 test-preload-894142 kubelet[1128]: E0120 16:16:27.356428    1128 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-5tj9s" podUID=98ee6bd7-58d3-46e6-9a3c-506d849dd51e
	
	
	==> storage-provisioner [ea64ffb4a4493a94ebcb0069dfd3f899cae9d958aa0236cd13cc639ab7162a83] <==
	I0120 16:16:24.244365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-894142 -n test-preload-894142
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-894142 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-894142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-894142
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-894142: (1.195366592s)
--- FAIL: TestPreload (287.65s)
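Note: the kubelet entries in the log above report "No CNI configuration file in /etc/cni/net.d/" and an unregistered "coredns" ConfigMap, i.e. the node came back up without a CNI configuration, so coredns could not be mounted or become ready. A minimal, illustrative check (not something the test itself runs, shown here only as a sketch against this profile) would be to list the CNI config directory on the node over SSH:

	out/minikube-linux-amd64 -p test-preload-894142 ssh "ls /etc/cni/net.d/"

An empty listing would be consistent with the NetworkPluginNotReady errors the kubelet logged above.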

                                                
                                    
x
+
TestKubernetesUpgrade (445.87s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0120 16:19:45.663963 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m17.148814208s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-207056] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-207056" primary control-plane node in "kubernetes-upgrade-207056" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:19:36.652668 2173708 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:19:36.652831 2173708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:19:36.652845 2173708 out.go:358] Setting ErrFile to fd 2...
	I0120 16:19:36.652853 2173708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:19:36.653207 2173708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:19:36.654059 2173708 out.go:352] Setting JSON to false
	I0120 16:19:36.655453 2173708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":28923,"bootTime":1737361054,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:19:36.655612 2173708 start.go:139] virtualization: kvm guest
	I0120 16:19:36.658302 2173708 out.go:177] * [kubernetes-upgrade-207056] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:19:36.659815 2173708 notify.go:220] Checking for updates...
	I0120 16:19:36.659858 2173708 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:19:36.661190 2173708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:19:36.662517 2173708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:19:36.664103 2173708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:19:36.665481 2173708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:19:36.666900 2173708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:19:36.668482 2173708 config.go:182] Loaded profile config "NoKubernetes-383886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:19:36.668585 2173708 config.go:182] Loaded profile config "cert-expiration-448539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:19:36.668704 2173708 config.go:182] Loaded profile config "force-systemd-env-417532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:19:36.668883 2173708 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:19:36.708175 2173708 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:19:36.709365 2173708 start.go:297] selected driver: kvm2
	I0120 16:19:36.709381 2173708 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:19:36.709400 2173708 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:19:36.710432 2173708 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:19:36.710533 2173708 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:19:36.726756 2173708 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:19:36.726836 2173708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:19:36.727180 2173708 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 16:19:36.727225 2173708 cni.go:84] Creating CNI manager for ""
	I0120 16:19:36.727308 2173708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:19:36.727325 2173708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:19:36.727395 2173708 start.go:340] cluster config:
	{Name:kubernetes-upgrade-207056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-207056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:19:36.727570 2173708 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:19:36.730277 2173708 out.go:177] * Starting "kubernetes-upgrade-207056" primary control-plane node in "kubernetes-upgrade-207056" cluster
	I0120 16:19:36.731503 2173708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 16:19:36.731550 2173708 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:19:36.731558 2173708 cache.go:56] Caching tarball of preloaded images
	I0120 16:19:36.731699 2173708 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:19:36.731716 2173708 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 16:19:36.731836 2173708 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/config.json ...
	I0120 16:19:36.731863 2173708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/config.json: {Name:mkfb9c84dc02383a8af915902260424a1bc9dabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:19:36.732036 2173708 start.go:360] acquireMachinesLock for kubernetes-upgrade-207056: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:20:21.225577 2173708 start.go:364] duration metric: took 44.49346566s to acquireMachinesLock for "kubernetes-upgrade-207056"
	I0120 16:20:21.225678 2173708 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-207056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernete
s-upgrade-207056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:20:21.225909 2173708 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:20:21.228280 2173708 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 16:20:21.228531 2173708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:20:21.228587 2173708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:20:21.247816 2173708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40727
	I0120 16:20:21.248398 2173708 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:20:21.249084 2173708 main.go:141] libmachine: Using API Version  1
	I0120 16:20:21.249116 2173708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:20:21.249472 2173708 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:20:21.249740 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetMachineName
	I0120 16:20:21.249909 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:20:21.250104 2173708 start.go:159] libmachine.API.Create for "kubernetes-upgrade-207056" (driver="kvm2")
	I0120 16:20:21.250144 2173708 client.go:168] LocalClient.Create starting
	I0120 16:20:21.250175 2173708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:20:21.250210 2173708 main.go:141] libmachine: Decoding PEM data...
	I0120 16:20:21.250233 2173708 main.go:141] libmachine: Parsing certificate...
	I0120 16:20:21.250290 2173708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:20:21.250317 2173708 main.go:141] libmachine: Decoding PEM data...
	I0120 16:20:21.250328 2173708 main.go:141] libmachine: Parsing certificate...
	I0120 16:20:21.250348 2173708 main.go:141] libmachine: Running pre-create checks...
	I0120 16:20:21.250363 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .PreCreateCheck
	I0120 16:20:21.250786 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetConfigRaw
	I0120 16:20:21.251225 2173708 main.go:141] libmachine: Creating machine...
	I0120 16:20:21.251238 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .Create
	I0120 16:20:21.251364 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) creating KVM machine...
	I0120 16:20:21.251385 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) creating network...
	I0120 16:20:21.252894 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found existing default KVM network
	I0120 16:20:21.254048 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:21.253865 2174272 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:22:0c:f2} reservation:<nil>}
	I0120 16:20:21.254893 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:21.254802 2174272 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:20:f4:9d} reservation:<nil>}
	I0120 16:20:21.257058 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:21.256860 2174272 network.go:209] skipping subnet 192.168.61.0/24 that is reserved: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0120 16:20:21.257940 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:21.257842 2174272 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000304a40}
	I0120 16:20:21.257974 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | created network xml: 
	I0120 16:20:21.257985 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | <network>
	I0120 16:20:21.258019 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |   <name>mk-kubernetes-upgrade-207056</name>
	I0120 16:20:21.258033 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |   <dns enable='no'/>
	I0120 16:20:21.258040 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |   
	I0120 16:20:21.258051 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0120 16:20:21.258058 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |     <dhcp>
	I0120 16:20:21.258075 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0120 16:20:21.258090 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |     </dhcp>
	I0120 16:20:21.258128 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |   </ip>
	I0120 16:20:21.258151 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG |   
	I0120 16:20:21.258161 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | </network>
	I0120 16:20:21.258167 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | 
	I0120 16:20:21.264103 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | trying to create private KVM network mk-kubernetes-upgrade-207056 192.168.72.0/24...
	I0120 16:20:21.356684 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056 ...
	I0120 16:20:21.356742 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | private KVM network mk-kubernetes-upgrade-207056 192.168.72.0/24 created
	I0120 16:20:21.356755 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:20:21.356789 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:21.356596 2174272 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:20:21.356818 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:20:21.682874 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:21.682700 2174272 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa...
	I0120 16:20:21.974629 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:21.974427 2174272 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/kubernetes-upgrade-207056.rawdisk...
	I0120 16:20:21.974673 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | Writing magic tar header
	I0120 16:20:21.974691 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | Writing SSH key tar header
	I0120 16:20:21.974706 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:21.974621 2174272 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056 ...
	I0120 16:20:21.974850 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056
	I0120 16:20:21.974882 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:20:21.974897 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056 (perms=drwx------)
	I0120 16:20:21.974915 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:20:21.974925 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:20:21.974942 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:20:21.974970 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:20:21.974987 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:20:21.975000 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:20:21.975016 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:20:21.975029 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:20:21.975042 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) creating domain...
	I0120 16:20:21.975055 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | checking permissions on dir: /home/jenkins
	I0120 16:20:21.975068 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | checking permissions on dir: /home
	I0120 16:20:21.975080 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | skipping /home - not owner
	I0120 16:20:21.976381 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) define libvirt domain using xml: 
	I0120 16:20:21.976403 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) <domain type='kvm'>
	I0120 16:20:21.976411 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   <name>kubernetes-upgrade-207056</name>
	I0120 16:20:21.976419 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   <memory unit='MiB'>2200</memory>
	I0120 16:20:21.976457 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   <vcpu>2</vcpu>
	I0120 16:20:21.976478 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   <features>
	I0120 16:20:21.976486 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <acpi/>
	I0120 16:20:21.976496 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <apic/>
	I0120 16:20:21.976529 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <pae/>
	I0120 16:20:21.976556 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     
	I0120 16:20:21.976565 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   </features>
	I0120 16:20:21.976574 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   <cpu mode='host-passthrough'>
	I0120 16:20:21.976582 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   
	I0120 16:20:21.976602 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   </cpu>
	I0120 16:20:21.976628 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   <os>
	I0120 16:20:21.976653 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <type>hvm</type>
	I0120 16:20:21.976662 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <boot dev='cdrom'/>
	I0120 16:20:21.976669 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <boot dev='hd'/>
	I0120 16:20:21.976679 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <bootmenu enable='no'/>
	I0120 16:20:21.976689 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   </os>
	I0120 16:20:21.976697 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   <devices>
	I0120 16:20:21.976706 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <disk type='file' device='cdrom'>
	I0120 16:20:21.976724 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/boot2docker.iso'/>
	I0120 16:20:21.976740 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <target dev='hdc' bus='scsi'/>
	I0120 16:20:21.976751 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <readonly/>
	I0120 16:20:21.976761 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     </disk>
	I0120 16:20:21.976770 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <disk type='file' device='disk'>
	I0120 16:20:21.976783 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:20:21.976801 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/kubernetes-upgrade-207056.rawdisk'/>
	I0120 16:20:21.976816 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <target dev='hda' bus='virtio'/>
	I0120 16:20:21.976826 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     </disk>
	I0120 16:20:21.976836 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <interface type='network'>
	I0120 16:20:21.976845 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <source network='mk-kubernetes-upgrade-207056'/>
	I0120 16:20:21.976866 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <model type='virtio'/>
	I0120 16:20:21.976874 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     </interface>
	I0120 16:20:21.976882 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <interface type='network'>
	I0120 16:20:21.976915 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <source network='default'/>
	I0120 16:20:21.976941 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <model type='virtio'/>
	I0120 16:20:21.976956 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     </interface>
	I0120 16:20:21.976972 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <serial type='pty'>
	I0120 16:20:21.976983 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <target port='0'/>
	I0120 16:20:21.977018 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     </serial>
	I0120 16:20:21.977040 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <console type='pty'>
	I0120 16:20:21.977047 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <target type='serial' port='0'/>
	I0120 16:20:21.977055 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     </console>
	I0120 16:20:21.977063 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     <rng model='virtio'>
	I0120 16:20:21.977071 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)       <backend model='random'>/dev/random</backend>
	I0120 16:20:21.977080 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     </rng>
	I0120 16:20:21.977092 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     
	I0120 16:20:21.977101 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)     
	I0120 16:20:21.977108 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056)   </devices>
	I0120 16:20:21.977116 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) </domain>
	I0120 16:20:21.977122 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) 
	I0120 16:20:21.981763 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:77:95:01 in network default
	I0120 16:20:21.982614 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:21.982634 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) starting domain...
	I0120 16:20:21.982653 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) ensuring networks are active...
	I0120 16:20:21.983486 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Ensuring network default is active
	I0120 16:20:21.984026 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Ensuring network mk-kubernetes-upgrade-207056 is active
	I0120 16:20:21.984902 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) getting domain XML...
	I0120 16:20:21.985752 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) creating domain...
	I0120 16:20:23.538639 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) waiting for IP...
	I0120 16:20:23.540055 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:23.540614 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:23.540703 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:23.540606 2174272 retry.go:31] will retry after 297.322375ms: waiting for domain to come up
	I0120 16:20:23.840139 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:23.840734 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:23.840793 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:23.840711 2174272 retry.go:31] will retry after 322.187568ms: waiting for domain to come up
	I0120 16:20:24.164169 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:24.164776 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:24.164806 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:24.164752 2174272 retry.go:31] will retry after 469.233955ms: waiting for domain to come up
	I0120 16:20:24.635685 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:24.636352 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:24.636379 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:24.636254 2174272 retry.go:31] will retry after 516.400088ms: waiting for domain to come up
	I0120 16:20:25.153979 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:25.154501 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:25.154537 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:25.154466 2174272 retry.go:31] will retry after 551.810389ms: waiting for domain to come up
	I0120 16:20:25.708127 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:25.708706 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:25.708738 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:25.708681 2174272 retry.go:31] will retry after 726.789901ms: waiting for domain to come up
	I0120 16:20:26.437184 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:26.437730 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:26.437767 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:26.437689 2174272 retry.go:31] will retry after 901.204811ms: waiting for domain to come up
	I0120 16:20:27.341261 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:27.341874 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:27.341907 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:27.341821 2174272 retry.go:31] will retry after 956.240181ms: waiting for domain to come up
	I0120 16:20:28.299977 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:28.300591 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:28.300624 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:28.300548 2174272 retry.go:31] will retry after 1.711352275s: waiting for domain to come up
	I0120 16:20:30.014757 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:30.015293 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:30.015359 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:30.015252 2174272 retry.go:31] will retry after 1.73379822s: waiting for domain to come up
	I0120 16:20:31.750315 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:31.750860 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:31.750891 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:31.750829 2174272 retry.go:31] will retry after 1.940349907s: waiting for domain to come up
	I0120 16:20:33.693985 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:33.694556 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:33.694586 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:33.694518 2174272 retry.go:31] will retry after 2.407405692s: waiting for domain to come up
	I0120 16:20:36.104052 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:36.104534 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:36.104565 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:36.104509 2174272 retry.go:31] will retry after 3.401949524s: waiting for domain to come up
	I0120 16:20:39.509508 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:39.509911 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find current IP address of domain kubernetes-upgrade-207056 in network mk-kubernetes-upgrade-207056
	I0120 16:20:39.509939 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | I0120 16:20:39.509880 2174272 retry.go:31] will retry after 5.112358765s: waiting for domain to come up
	I0120 16:20:44.623852 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.624409 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) found domain IP: 192.168.72.209
	I0120 16:20:44.624434 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) reserving static IP address...
	I0120 16:20:44.624468 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has current primary IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.624914 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-207056", mac: "52:54:00:e5:e5:bd", ip: "192.168.72.209"} in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.708477 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | Getting to WaitForSSH function...
	I0120 16:20:44.708510 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) reserved static IP address 192.168.72.209 for domain kubernetes-upgrade-207056
	I0120 16:20:44.708526 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) waiting for SSH...
	I0120 16:20:44.711257 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.711700 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:44.711738 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.711920 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | Using SSH client type: external
	I0120 16:20:44.711957 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa (-rw-------)
	I0120 16:20:44.712003 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:20:44.712025 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | About to run SSH command:
	I0120 16:20:44.712047 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | exit 0
	I0120 16:20:44.834757 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | SSH cmd err, output: <nil>: 
	I0120 16:20:44.834993 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) KVM machine creation complete
	I0120 16:20:44.835313 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetConfigRaw
	I0120 16:20:44.835901 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:20:44.836095 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:20:44.836236 2173708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:20:44.836251 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetState
	I0120 16:20:44.837570 2173708 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:20:44.837586 2173708 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:20:44.837592 2173708 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:20:44.837598 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:44.839806 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.840220 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:44.840275 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.840375 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:44.840569 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:44.840729 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:44.840889 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:44.841095 2173708 main.go:141] libmachine: Using SSH client type: native
	I0120 16:20:44.841325 2173708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:20:44.841344 2173708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:20:44.942387 2173708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:20:44.942417 2173708 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:20:44.942428 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:44.946364 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.946875 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:44.946908 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:44.947107 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:44.947382 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:44.947582 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:44.947750 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:44.947957 2173708 main.go:141] libmachine: Using SSH client type: native
	I0120 16:20:44.948195 2173708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:20:44.948214 2173708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:20:45.057179 2173708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:20:45.057264 2173708 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:20:45.057287 2173708 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:20:45.057296 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetMachineName
	I0120 16:20:45.057601 2173708 buildroot.go:166] provisioning hostname "kubernetes-upgrade-207056"
	I0120 16:20:45.057636 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetMachineName
	I0120 16:20:45.057823 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:45.060989 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.061405 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:45.061434 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.061607 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:45.061812 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:45.061994 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:45.062162 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:45.062389 2173708 main.go:141] libmachine: Using SSH client type: native
	I0120 16:20:45.062631 2173708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:20:45.062649 2173708 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-207056 && echo "kubernetes-upgrade-207056" | sudo tee /etc/hostname
	I0120 16:20:45.179496 2173708 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-207056
	
	I0120 16:20:45.179535 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:45.183018 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.183453 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:45.183492 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.183698 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:45.183916 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:45.184096 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:45.184261 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:45.184438 2173708 main.go:141] libmachine: Using SSH client type: native
	I0120 16:20:45.184626 2173708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:20:45.184653 2173708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-207056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-207056/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-207056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:20:45.297102 2173708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:20:45.297139 2173708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:20:45.297168 2173708 buildroot.go:174] setting up certificates
	I0120 16:20:45.297181 2173708 provision.go:84] configureAuth start
	I0120 16:20:45.297198 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetMachineName
	I0120 16:20:45.297488 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetIP
	I0120 16:20:45.300805 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.301211 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:45.301246 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.301435 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:45.304049 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.304465 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:45.304495 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.304618 2173708 provision.go:143] copyHostCerts
	I0120 16:20:45.304678 2173708 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:20:45.304705 2173708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:20:45.304759 2173708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:20:45.304842 2173708 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:20:45.304850 2173708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:20:45.304868 2173708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:20:45.304925 2173708 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:20:45.304937 2173708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:20:45.304956 2173708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:20:45.305000 2173708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-207056 san=[127.0.0.1 192.168.72.209 kubernetes-upgrade-207056 localhost minikube]
	I0120 16:20:45.499776 2173708 provision.go:177] copyRemoteCerts
	I0120 16:20:45.499836 2173708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:20:45.499865 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:45.502871 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.503193 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:45.503241 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.503438 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:45.503683 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:45.503852 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:45.504047 2173708 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa Username:docker}
	I0120 16:20:45.592915 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:20:45.620835 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0120 16:20:45.647959 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 16:20:45.673974 2173708 provision.go:87] duration metric: took 376.77477ms to configureAuth
	I0120 16:20:45.674006 2173708 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:20:45.674189 2173708 config.go:182] Loaded profile config "kubernetes-upgrade-207056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:20:45.674272 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:45.677070 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.677319 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:45.677366 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.677623 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:45.677805 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:45.677950 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:45.678086 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:45.678240 2173708 main.go:141] libmachine: Using SSH client type: native
	I0120 16:20:45.678463 2173708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:20:45.678479 2173708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:20:45.902341 2173708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
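The SSH command above writes the --insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O. A quick, illustrative way to confirm the drop-in landed and the service came back up (for example from `minikube ssh -p kubernetes-upgrade-207056`):
	cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio      # expect: active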
	I0120 16:20:45.902379 2173708 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:20:45.902390 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetURL
	I0120 16:20:45.904087 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | using libvirt version 6000000
	I0120 16:20:45.906280 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.906756 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:45.906789 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.906976 2173708 main.go:141] libmachine: Docker is up and running!
	I0120 16:20:45.906992 2173708 main.go:141] libmachine: Reticulating splines...
	I0120 16:20:45.907003 2173708 client.go:171] duration metric: took 24.656849186s to LocalClient.Create
	I0120 16:20:45.907042 2173708 start.go:167] duration metric: took 24.656941678s to libmachine.API.Create "kubernetes-upgrade-207056"
	I0120 16:20:45.907058 2173708 start.go:293] postStartSetup for "kubernetes-upgrade-207056" (driver="kvm2")
	I0120 16:20:45.907077 2173708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:20:45.907105 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:20:45.907412 2173708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:20:45.907447 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:45.909777 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.910132 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:45.910173 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:45.910265 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:45.910450 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:45.910592 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:45.910746 2173708 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa Username:docker}
	I0120 16:20:45.989987 2173708 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:20:45.994498 2173708 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:20:45.994532 2173708 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:20:45.994637 2173708 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:20:45.994749 2173708 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:20:45.994882 2173708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:20:46.005961 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:20:46.032788 2173708 start.go:296] duration metric: took 125.70403ms for postStartSetup
	I0120 16:20:46.032953 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetConfigRaw
	I0120 16:20:46.033625 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetIP
	I0120 16:20:46.036114 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.036433 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:46.036466 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.036809 2173708 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/config.json ...
	I0120 16:20:46.037000 2173708 start.go:128] duration metric: took 24.811073454s to createHost
	I0120 16:20:46.037057 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:46.039432 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.039743 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:46.039767 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.039909 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:46.040101 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:46.040277 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:46.040428 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:46.040596 2173708 main.go:141] libmachine: Using SSH client type: native
	I0120 16:20:46.040784 2173708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:20:46.040798 2173708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:20:46.143715 2173708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737390046.122546145
	
	I0120 16:20:46.143748 2173708 fix.go:216] guest clock: 1737390046.122546145
	I0120 16:20:46.143755 2173708 fix.go:229] Guest: 2025-01-20 16:20:46.122546145 +0000 UTC Remote: 2025-01-20 16:20:46.037012108 +0000 UTC m=+69.447711965 (delta=85.534037ms)
	I0120 16:20:46.143789 2173708 fix.go:200] guest clock delta is within tolerance: 85.534037ms
	I0120 16:20:46.143794 2173708 start.go:83] releasing machines lock for "kubernetes-upgrade-207056", held for 24.918172143s
	I0120 16:20:46.143827 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:20:46.144158 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetIP
	I0120 16:20:46.147621 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.147993 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:46.148026 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.148216 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:20:46.148762 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:20:46.148962 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:20:46.149085 2173708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:20:46.149139 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:46.149207 2173708 ssh_runner.go:195] Run: cat /version.json
	I0120 16:20:46.149237 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:20:46.152528 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.152561 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.152946 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:46.152977 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.153011 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:46.153031 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:46.153182 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:46.153282 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:20:46.153370 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:46.153445 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:20:46.153501 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:46.153606 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:20:46.153695 2173708 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa Username:docker}
	I0120 16:20:46.153751 2173708 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa Username:docker}
	I0120 16:20:46.279344 2173708 ssh_runner.go:195] Run: systemctl --version
	I0120 16:20:46.286162 2173708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:20:46.449248 2173708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:20:46.456847 2173708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:20:46.456936 2173708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:20:46.475257 2173708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:20:46.475286 2173708 start.go:495] detecting cgroup driver to use...
	I0120 16:20:46.475380 2173708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:20:46.497430 2173708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:20:46.514209 2173708 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:20:46.514287 2173708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:20:46.530278 2173708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:20:46.545362 2173708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:20:46.673864 2173708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:20:46.825075 2173708 docker.go:233] disabling docker service ...
	I0120 16:20:46.825150 2173708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:20:46.841312 2173708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:20:46.855973 2173708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:20:46.998820 2173708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:20:47.142650 2173708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:20:47.158383 2173708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:20:47.179657 2173708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 16:20:47.179780 2173708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:20:47.191198 2173708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:20:47.191275 2173708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:20:47.203614 2173708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:20:47.216759 2173708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
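The three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and re-insert conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. An illustrative fragment of what that drop-in is expected to contain afterwards (the real file carries additional keys, and the section headers are assumed from CRI-O's standard config layout):
	# /etc/crio/crio.conf.d/02-crio.conf (fragment, illustrative)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"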
	I0120 16:20:47.228379 2173708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:20:47.240320 2173708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:20:47.252303 2173708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:20:47.252377 2173708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:20:47.267016 2173708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
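The sysctl probe above fails with status 255 typically because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/ does not exist; minikube treats this as non-fatal, loads the module, and enables IPv4 forwarding. The equivalent manual sequence, as a sketch:
	sudo modprobe br_netfilter                             # creates /proc/sys/net/bridge/*
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1    # resolvable once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"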
	I0120 16:20:47.278310 2173708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:20:47.427270 2173708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:20:47.539600 2173708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:20:47.539681 2173708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:20:47.545015 2173708 start.go:563] Will wait 60s for crictl version
	I0120 16:20:47.545088 2173708 ssh_runner.go:195] Run: which crictl
	I0120 16:20:47.549351 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:20:47.589837 2173708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:20:47.589942 2173708 ssh_runner.go:195] Run: crio --version
	I0120 16:20:47.621595 2173708 ssh_runner.go:195] Run: crio --version
	I0120 16:20:47.656504 2173708 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 16:20:47.657779 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetIP
	I0120 16:20:47.661083 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:47.661536 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:20:38 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:20:47.661581 2173708 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:20:47.661870 2173708 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 16:20:47.666321 2173708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:20:47.679933 2173708 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-207056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-207056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:20:47.680090 2173708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 16:20:47.680144 2173708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:20:47.723712 2173708 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 16:20:47.723794 2173708 ssh_runner.go:195] Run: which lz4
	I0120 16:20:47.730439 2173708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:20:47.737436 2173708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:20:47.737492 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 16:20:49.591551 2173708 crio.go:462] duration metric: took 1.861148784s to copy over tarball
	I0120 16:20:49.591654 2173708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:20:52.447908 2173708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.856213272s)
	I0120 16:20:52.447943 2173708 crio.go:469] duration metric: took 2.856351452s to extract the tarball
	I0120 16:20:52.447953 2173708 ssh_runner.go:146] rm: /preloaded.tar.lz4
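The preload path above: crictl reports that kube-apiserver:v1.20.0 is not present in the runtime, so the host copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4, extracts it into /var with tar -I lz4, and deletes the tarball. A one-line check (sketch, run inside the VM) for whether the extraction actually made the images visible:
	sudo crictl images --output json | grep -c 'registry.k8s.io/kube-apiserver:v1.20.0'   # 0 means still not preloaded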
	I0120 16:20:52.492219 2173708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:20:52.557942 2173708 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 16:20:52.557980 2173708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 16:20:52.558113 2173708 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:20:52.558114 2173708 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:20:52.558162 2173708 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:20:52.558226 2173708 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:20:52.558217 2173708 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 16:20:52.558125 2173708 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 16:20:52.558116 2173708 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:20:52.558651 2173708 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:20:52.560390 2173708 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:20:52.560407 2173708 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:20:52.560436 2173708 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 16:20:52.560444 2173708 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:20:52.560455 2173708 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 16:20:52.560474 2173708 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:20:52.560487 2173708 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:20:52.560480 2173708 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:20:52.792349 2173708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:20:52.796824 2173708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:20:52.804031 2173708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 16:20:52.821327 2173708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 16:20:52.822742 2173708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:20:52.825142 2173708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 16:20:52.849997 2173708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:20:52.914252 2173708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 16:20:52.914318 2173708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:20:52.914362 2173708 ssh_runner.go:195] Run: which crictl
	I0120 16:20:52.925375 2173708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 16:20:52.925511 2173708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:20:52.925610 2173708 ssh_runner.go:195] Run: which crictl
	I0120 16:20:52.964144 2173708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 16:20:52.964205 2173708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:20:52.964273 2173708 ssh_runner.go:195] Run: which crictl
	I0120 16:20:53.021125 2173708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 16:20:53.021189 2173708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 16:20:53.021244 2173708 ssh_runner.go:195] Run: which crictl
	I0120 16:20:53.021368 2173708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 16:20:53.021405 2173708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:20:53.021436 2173708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 16:20:53.021478 2173708 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 16:20:53.021523 2173708 ssh_runner.go:195] Run: which crictl
	I0120 16:20:53.021442 2173708 ssh_runner.go:195] Run: which crictl
	I0120 16:20:53.026399 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:20:53.026473 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:20:53.026479 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:20:53.026643 2173708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 16:20:53.026682 2173708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:20:53.026715 2173708 ssh_runner.go:195] Run: which crictl
	I0120 16:20:53.027333 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:20:53.044638 2173708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:20:53.148773 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:20:53.148801 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:20:53.148902 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:20:53.149037 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:20:53.149099 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:20:53.149118 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:20:53.149221 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:20:53.300841 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:20:53.300887 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:20:53.338879 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:20:53.350028 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:20:53.350092 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:20:53.350159 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:20:53.350196 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:20:53.470812 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:20:53.471095 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:20:53.517929 2173708 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 16:20:53.518112 2173708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:20:53.519945 2173708 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 16:20:53.520006 2173708 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 16:20:53.524022 2173708 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 16:20:53.585155 2173708 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 16:20:53.585164 2173708 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 16:20:53.587383 2173708 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 16:20:53.587461 2173708 cache_images.go:92] duration metric: took 1.029463924s to LoadCachedImages
	W0120 16:20:53.587552 2173708 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0120 16:20:53.587572 2173708 kubeadm.go:934] updating node { 192.168.72.209 8443 v1.20.0 crio true true} ...
	I0120 16:20:53.587742 2173708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-207056 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-207056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:20:53.587824 2173708 ssh_runner.go:195] Run: crio config
	I0120 16:20:53.637708 2173708 cni.go:84] Creating CNI manager for ""
	I0120 16:20:53.637743 2173708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:20:53.637757 2173708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:20:53.637783 2173708 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.209 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-207056 NodeName:kubernetes-upgrade-207056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 16:20:53.637970 2173708 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-207056"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:20:53.638071 2173708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 16:20:53.650472 2173708 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:20:53.650559 2173708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:20:53.662165 2173708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0120 16:20:53.680614 2173708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:20:53.699404 2173708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0120 16:20:53.718143 2173708 ssh_runner.go:195] Run: grep 192.168.72.209	control-plane.minikube.internal$ /etc/hosts
	I0120 16:20:53.722952 2173708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:20:53.737526 2173708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:20:53.884862 2173708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:20:53.904706 2173708 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056 for IP: 192.168.72.209
	I0120 16:20:53.904742 2173708 certs.go:194] generating shared ca certs ...
	I0120 16:20:53.904768 2173708 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:20:53.904985 2173708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:20:53.905073 2173708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:20:53.905089 2173708 certs.go:256] generating profile certs ...
	I0120 16:20:53.905168 2173708 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/client.key
	I0120 16:20:53.905209 2173708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/client.crt with IP's: []
	I0120 16:20:54.267978 2173708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/client.crt ...
	I0120 16:20:54.268014 2173708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/client.crt: {Name:mk04c0353d0380fbe5b11834a3ff2602f696d911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:20:54.268191 2173708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/client.key ...
	I0120 16:20:54.268205 2173708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/client.key: {Name:mke0f1244fabc54fdb1cf5f438695ca3d4839067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:20:54.268294 2173708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.key.b35a4ac7
	I0120 16:20:54.268311 2173708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.crt.b35a4ac7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.209]
	I0120 16:20:54.490081 2173708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.crt.b35a4ac7 ...
	I0120 16:20:54.490123 2173708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.crt.b35a4ac7: {Name:mkb279365fed11ce88672aa400aa0ac8bd2b0fc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:20:54.490327 2173708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.key.b35a4ac7 ...
	I0120 16:20:54.490345 2173708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.key.b35a4ac7: {Name:mk4ae9c1e28ff59b2c69e222ef5c138e3f9f7d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:20:54.490425 2173708 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.crt.b35a4ac7 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.crt
	I0120 16:20:54.490496 2173708 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.key.b35a4ac7 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.key
	I0120 16:20:54.490560 2173708 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.key
	I0120 16:20:54.490577 2173708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.crt with IP's: []
	I0120 16:20:54.938509 2173708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.crt ...
	I0120 16:20:54.938549 2173708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.crt: {Name:mk070e991ed6e57ddbf45c78e52007a80f18b3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:20:54.938777 2173708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.key ...
	I0120 16:20:54.938798 2173708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.key: {Name:mk7d2373f469ea684fb3a1fc43e25be0b41a510f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:20:54.939009 2173708 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:20:54.939051 2173708 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:20:54.939063 2173708 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:20:54.939085 2173708 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:20:54.939108 2173708 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:20:54.939125 2173708 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:20:54.939161 2173708 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:20:54.939855 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:20:54.982162 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:20:55.025220 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:20:55.073224 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:20:55.106444 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0120 16:20:55.140590 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:20:55.168849 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:20:55.195570 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:20:55.225268 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:20:55.255311 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:20:55.288887 2173708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:20:55.321620 2173708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:20:55.340560 2173708 ssh_runner.go:195] Run: openssl version
	I0120 16:20:55.347581 2173708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:20:55.360261 2173708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:20:55.365976 2173708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:20:55.366059 2173708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:20:55.372769 2173708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:20:55.384496 2173708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:20:55.396382 2173708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:20:55.401495 2173708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:20:55.401576 2173708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:20:55.409891 2173708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:20:55.422249 2173708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:20:55.434620 2173708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:20:55.439630 2173708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:20:55.439696 2173708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:20:55.445681 2173708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
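Each CA dropped under /usr/share/ca-certificates above is exposed to OpenSSL consumers by symlinking it into /etc/ssl/certs under its subject-hash name (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). The generic pattern, as a sketch:
	# link a CA into /etc/ssl/certs under its OpenSSL subject-hash name
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"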
	I0120 16:20:55.457865 2173708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:20:55.462418 2173708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:20:55.462480 2173708 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-207056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-207056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:20:55.462561 2173708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:20:55.462652 2173708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:20:55.508589 2173708 cri.go:89] found id: ""
	I0120 16:20:55.508683 2173708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:20:55.520586 2173708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:20:55.535649 2173708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:20:55.550364 2173708 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:20:55.550395 2173708 kubeadm.go:157] found existing configuration files:
	
	I0120 16:20:55.550456 2173708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:20:55.561555 2173708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:20:55.561632 2173708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:20:55.573232 2173708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:20:55.583510 2173708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:20:55.583594 2173708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:20:55.597975 2173708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:20:55.612004 2173708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:20:55.612094 2173708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:20:55.626926 2173708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:20:55.638342 2173708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:20:55.638420 2173708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:20:55.648902 2173708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:20:56.024065 2173708 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:22:53.679426 2173708 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 16:22:53.679576 2173708 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 16:22:53.681098 2173708 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 16:22:53.681171 2173708 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:22:53.681266 2173708 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:22:53.681408 2173708 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:22:53.681547 2173708 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 16:22:53.681603 2173708 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:22:53.683560 2173708 out.go:235]   - Generating certificates and keys ...
	I0120 16:22:53.683646 2173708 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:22:53.683721 2173708 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:22:53.683816 2173708 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:22:53.683899 2173708 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:22:53.683989 2173708 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:22:53.684082 2173708 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:22:53.684152 2173708 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:22:53.684285 2173708 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-207056 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	I0120 16:22:53.684360 2173708 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:22:53.684503 2173708 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-207056 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	I0120 16:22:53.684591 2173708 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:22:53.684681 2173708 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:22:53.684744 2173708 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:22:53.684823 2173708 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:22:53.684895 2173708 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:22:53.684973 2173708 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:22:53.685067 2173708 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:22:53.685151 2173708 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:22:53.685275 2173708 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:22:53.685362 2173708 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:22:53.685395 2173708 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:22:53.685447 2173708 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:22:53.687047 2173708 out.go:235]   - Booting up control plane ...
	I0120 16:22:53.687143 2173708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:22:53.687241 2173708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:22:53.687328 2173708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:22:53.687403 2173708 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:22:53.687542 2173708 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 16:22:53.687616 2173708 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 16:22:53.687688 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:22:53.687857 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:22:53.687948 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:22:53.688108 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:22:53.688189 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:22:53.688357 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:22:53.688416 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:22:53.688573 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:22:53.688630 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:22:53.688782 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:22:53.688788 2173708 kubeadm.go:310] 
	I0120 16:22:53.688832 2173708 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 16:22:53.688871 2173708 kubeadm.go:310] 		timed out waiting for the condition
	I0120 16:22:53.688877 2173708 kubeadm.go:310] 
	I0120 16:22:53.688904 2173708 kubeadm.go:310] 	This error is likely caused by:
	I0120 16:22:53.688947 2173708 kubeadm.go:310] 		- The kubelet is not running
	I0120 16:22:53.689064 2173708 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 16:22:53.689072 2173708 kubeadm.go:310] 
	I0120 16:22:53.689167 2173708 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 16:22:53.689219 2173708 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 16:22:53.689246 2173708 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 16:22:53.689252 2173708 kubeadm.go:310] 
	I0120 16:22:53.689352 2173708 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 16:22:53.689418 2173708 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 16:22:53.689424 2173708 kubeadm.go:310] 
	I0120 16:22:53.689507 2173708 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 16:22:53.689576 2173708 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 16:22:53.689643 2173708 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 16:22:53.689708 2173708 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 16:22:53.689731 2173708 kubeadm.go:310] 
	W0120 16:22:53.689834 2173708 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-207056 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-207056 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-207056 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-207056 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 16:22:53.689880 2173708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 16:22:55.598438 2173708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.908528441s)
	I0120 16:22:55.598534 2173708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:22:55.614277 2173708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:22:55.625677 2173708 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:22:55.625710 2173708 kubeadm.go:157] found existing configuration files:
	
	I0120 16:22:55.625763 2173708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:22:55.635975 2173708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:22:55.636052 2173708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:22:55.646626 2173708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:22:55.656595 2173708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:22:55.656654 2173708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:22:55.666973 2173708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:22:55.677020 2173708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:22:55.677093 2173708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:22:55.687355 2173708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:22:55.697357 2173708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:22:55.697428 2173708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:22:55.707562 2173708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:22:55.787296 2173708 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 16:22:55.787455 2173708 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:22:55.938986 2173708 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:22:55.939140 2173708 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:22:55.939279 2173708 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 16:22:56.140400 2173708 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:22:56.142484 2173708 out.go:235]   - Generating certificates and keys ...
	I0120 16:22:56.142643 2173708 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:22:56.142735 2173708 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:22:56.142854 2173708 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 16:22:56.142966 2173708 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 16:22:56.143079 2173708 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 16:22:56.143171 2173708 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 16:22:56.143313 2173708 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 16:22:56.143693 2173708 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 16:22:56.144131 2173708 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 16:22:56.144605 2173708 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 16:22:56.144668 2173708 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 16:22:56.144742 2173708 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:22:56.891630 2173708 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:22:57.258613 2173708 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:22:57.429285 2173708 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:22:57.706116 2173708 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:22:57.726222 2173708 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:22:57.729140 2173708 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:22:57.729319 2173708 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:22:57.908269 2173708 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:22:57.910555 2173708 out.go:235]   - Booting up control plane ...
	I0120 16:22:57.910721 2173708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:22:57.917133 2173708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:22:57.918068 2173708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:22:57.918832 2173708 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:22:57.921607 2173708 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 16:23:37.923341 2173708 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 16:23:37.923840 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:23:37.924090 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:23:42.924572 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:23:42.924878 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:23:52.925182 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:23:52.925501 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:24:12.926697 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:24:12.927091 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:24:52.926995 2173708 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:24:52.927377 2173708 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:24:52.927389 2173708 kubeadm.go:310] 
	I0120 16:24:52.927499 2173708 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 16:24:52.927586 2173708 kubeadm.go:310] 		timed out waiting for the condition
	I0120 16:24:52.927596 2173708 kubeadm.go:310] 
	I0120 16:24:52.927645 2173708 kubeadm.go:310] 	This error is likely caused by:
	I0120 16:24:52.927694 2173708 kubeadm.go:310] 		- The kubelet is not running
	I0120 16:24:52.927830 2173708 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 16:24:52.927838 2173708 kubeadm.go:310] 
	I0120 16:24:52.927975 2173708 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 16:24:52.928019 2173708 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 16:24:52.928098 2173708 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 16:24:52.928117 2173708 kubeadm.go:310] 
	I0120 16:24:52.928304 2173708 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 16:24:52.928438 2173708 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 16:24:52.928454 2173708 kubeadm.go:310] 
	I0120 16:24:52.928573 2173708 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 16:24:52.928679 2173708 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 16:24:52.928774 2173708 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 16:24:52.928865 2173708 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 16:24:52.928878 2173708 kubeadm.go:310] 
	I0120 16:24:52.952347 2173708 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:24:52.952492 2173708 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 16:24:52.952603 2173708 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 16:24:52.952828 2173708 kubeadm.go:394] duration metric: took 3m57.490350067s to StartCluster
	I0120 16:24:52.952936 2173708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:24:52.953030 2173708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:24:53.017142 2173708 cri.go:89] found id: ""
	I0120 16:24:53.017172 2173708 logs.go:282] 0 containers: []
	W0120 16:24:53.017184 2173708 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:24:53.017192 2173708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:24:53.017264 2173708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:24:53.055170 2173708 cri.go:89] found id: ""
	I0120 16:24:53.055209 2173708 logs.go:282] 0 containers: []
	W0120 16:24:53.055229 2173708 logs.go:284] No container was found matching "etcd"
	I0120 16:24:53.055237 2173708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:24:53.055317 2173708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:24:53.106830 2173708 cri.go:89] found id: ""
	I0120 16:24:53.106866 2173708 logs.go:282] 0 containers: []
	W0120 16:24:53.106879 2173708 logs.go:284] No container was found matching "coredns"
	I0120 16:24:53.106887 2173708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:24:53.106960 2173708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:24:53.142364 2173708 cri.go:89] found id: ""
	I0120 16:24:53.142401 2173708 logs.go:282] 0 containers: []
	W0120 16:24:53.142414 2173708 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:24:53.142423 2173708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:24:53.142502 2173708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:24:53.191560 2173708 cri.go:89] found id: ""
	I0120 16:24:53.191596 2173708 logs.go:282] 0 containers: []
	W0120 16:24:53.191608 2173708 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:24:53.191617 2173708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:24:53.191689 2173708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:24:53.241726 2173708 cri.go:89] found id: ""
	I0120 16:24:53.241759 2173708 logs.go:282] 0 containers: []
	W0120 16:24:53.241770 2173708 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:24:53.241778 2173708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:24:53.241841 2173708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:24:53.287750 2173708 cri.go:89] found id: ""
	I0120 16:24:53.287788 2173708 logs.go:282] 0 containers: []
	W0120 16:24:53.287799 2173708 logs.go:284] No container was found matching "kindnet"
	I0120 16:24:53.287814 2173708 logs.go:123] Gathering logs for kubelet ...
	I0120 16:24:53.287833 2173708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:24:53.357459 2173708 logs.go:123] Gathering logs for dmesg ...
	I0120 16:24:53.357502 2173708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:24:53.375840 2173708 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:24:53.375870 2173708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:24:53.533388 2173708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:24:53.533417 2173708 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:24:53.533431 2173708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:24:53.665669 2173708 logs.go:123] Gathering logs for container status ...
	I0120 16:24:53.665709 2173708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0120 16:24:53.716076 2173708 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 16:24:53.716159 2173708 out.go:270] * 
	* 
	W0120 16:24:53.716243 2173708 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 16:24:53.716262 2173708 out.go:270] * 
	* 
	W0120 16:24:53.717026 2173708 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:24:53.720373 2173708 out.go:201] 
	W0120 16:24:53.722058 2173708 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 16:24:53.722102 2173708 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 16:24:53.722125 2173708 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 16:24:53.723662 2173708 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
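Exit status 109 here maps to K8S_KUBELET_NOT_RUNNING: kubeadm's wait-control-plane phase gave up because the kubelet health endpoint at http://localhost:10248/healthz never answered. The log's own hints point at inspecting the kubelet and retrying with the systemd cgroup driver; the following is a minimal triage sketch along those lines, assuming the profile name and flags shown in the log above and the minikube binary used by the harness (not a verified fix for this run):

	# Check kubelet state on the node (commands taken from the kubeadm hints above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-207056 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-207056 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# List control-plane containers via crictl against the cri-o socket
	out/minikube-linux-amd64 -p kubernetes-upgrade-207056 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup driver the suggestion points at
	out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd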
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-207056
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-207056: (2.400118943s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-207056 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-207056 status --format={{.Host}}: exit status 7 (88.226542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.044851113s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-207056 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.328607ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-207056] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-207056
	    minikube start -p kubernetes-upgrade-207056 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2070562 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-207056 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
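The K8S_DOWNGRADE_UNSUPPORTED exit is the expected negative case for this step; the suggestion block above lists the recovery paths. A sketch of option 1 (recreate the profile at the older version) under the assumption that the same driver and runtime flags the test uses still apply:

	# Recreate the profile at v1.20.0 instead of downgrading in place
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-207056
	out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio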
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-207056 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.168781268s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-20 16:26:58.655240709 +0000 UTC m=+4937.942671897
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-207056 -n kubernetes-upgrade-207056
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-207056 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-207056 logs -n 25: (2.141300491s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo cat                            | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo docker                         | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo cat                            | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo cat                            | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo cat                            | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo cat                            | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo                                | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-708138 sudo crio                           | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-708138                                     | cilium-708138             | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC | 20 Jan 25 16:25 UTC |
	| start   | -p old-k8s-version-806597                            | old-k8s-version-806597    | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| start   | -p no-preload-552545                                 | no-preload-552545         | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-207056                         | kubernetes-upgrade-207056 | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-207056                         | kubernetes-upgrade-207056 | jenkins | v1.35.0 | 20 Jan 25 16:25 UTC | 20 Jan 25 16:26 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-435922 ssh                              | cert-options-435922       | jenkins | v1.35.0 | 20 Jan 25 16:26 UTC | 20 Jan 25 16:26 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-435922 -- sudo                       | cert-options-435922       | jenkins | v1.35.0 | 20 Jan 25 16:26 UTC | 20 Jan 25 16:26 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-435922                               | cert-options-435922       | jenkins | v1.35.0 | 20 Jan 25 16:26 UTC | 20 Jan 25 16:26 UTC |
	| start   | -p embed-certs-429406                                | embed-certs-429406        | jenkins | v1.35.0 | 20 Jan 25 16:26 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:26:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:26:03.811481 2181167 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:26:03.811760 2181167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:26:03.811771 2181167 out.go:358] Setting ErrFile to fd 2...
	I0120 16:26:03.811778 2181167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:26:03.812021 2181167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:26:03.812665 2181167 out.go:352] Setting JSON to false
	I0120 16:26:03.813749 2181167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29310,"bootTime":1737361054,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:26:03.813862 2181167 start.go:139] virtualization: kvm guest
	I0120 16:26:03.816108 2181167 out.go:177] * [embed-certs-429406] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:26:03.817487 2181167 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:26:03.817513 2181167 notify.go:220] Checking for updates...
	I0120 16:26:03.820332 2181167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:26:03.821677 2181167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:26:03.822883 2181167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:26:03.824103 2181167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:26:03.825447 2181167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:26:03.827240 2181167 config.go:182] Loaded profile config "kubernetes-upgrade-207056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:26:03.827463 2181167 config.go:182] Loaded profile config "no-preload-552545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:26:03.827572 2181167 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:26:03.827693 2181167 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:26:03.866126 2181167 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:26:03.867453 2181167 start.go:297] selected driver: kvm2
	I0120 16:26:03.867476 2181167 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:26:03.867491 2181167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:26:03.868259 2181167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:26:03.868361 2181167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:26:03.884373 2181167 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:26:03.884420 2181167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:26:03.884694 2181167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:26:03.884728 2181167 cni.go:84] Creating CNI manager for ""
	I0120 16:26:03.884778 2181167 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:26:03.884790 2181167 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:26:03.884862 2181167 start.go:340] cluster config:
	{Name:embed-certs-429406 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-429406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:26:03.884973 2181167 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:26:03.887478 2181167 out.go:177] * Starting "embed-certs-429406" primary control-plane node in "embed-certs-429406" cluster
	I0120 16:26:01.951815 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:01.952324 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:26:01.952385 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:26:01.952291 2180805 retry.go:31] will retry after 4.071957347s: waiting for domain to come up
	I0120 16:26:07.771863 2180359 start.go:364] duration metric: took 57.432073118s to acquireMachinesLock for "no-preload-552545"
	I0120 16:26:07.771939 2180359 start.go:93] Provisioning new machine with config: &{Name:no-preload-552545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-552545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:26:07.772102 2180359 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:26:03.888648 2181167 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:26:03.888692 2181167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:26:03.888702 2181167 cache.go:56] Caching tarball of preloaded images
	I0120 16:26:03.888834 2181167 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:26:03.888849 2181167 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:26:03.888954 2181167 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/config.json ...
	I0120 16:26:03.888976 2181167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/config.json: {Name:mk3a7f87591adbef3649da6044c083e3f0d2b3ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:03.889154 2181167 start.go:360] acquireMachinesLock for embed-certs-429406: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:26:06.028371 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.029058 2180315 main.go:141] libmachine: (old-k8s-version-806597) found domain IP: 192.168.50.241
	I0120 16:26:06.029085 2180315 main.go:141] libmachine: (old-k8s-version-806597) reserving static IP address...
	I0120 16:26:06.029117 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has current primary IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.029522 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-806597", mac: "52:54:00:02:1a:c1", ip: "192.168.50.241"} in network mk-old-k8s-version-806597
	I0120 16:26:06.116699 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | Getting to WaitForSSH function...
	I0120 16:26:06.116728 2180315 main.go:141] libmachine: (old-k8s-version-806597) reserved static IP address 192.168.50.241 for domain old-k8s-version-806597
	I0120 16:26:06.116741 2180315 main.go:141] libmachine: (old-k8s-version-806597) waiting for SSH...
	I0120 16:26:06.119622 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.120095 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.120129 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.120303 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | Using SSH client type: external
	I0120 16:26:06.120335 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa (-rw-------)
	I0120 16:26:06.120409 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:26:06.120431 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | About to run SSH command:
	I0120 16:26:06.120441 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | exit 0
	I0120 16:26:06.251348 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | SSH cmd err, output: <nil>: 
	I0120 16:26:06.251687 2180315 main.go:141] libmachine: (old-k8s-version-806597) KVM machine creation complete
	I0120 16:26:06.251979 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetConfigRaw
	I0120 16:26:06.252739 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:06.253017 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:06.253247 2180315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:26:06.253261 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetState
	I0120 16:26:06.254645 2180315 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:26:06.254663 2180315 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:26:06.254669 2180315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:26:06.254675 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.257222 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.257574 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.257620 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.257743 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:06.257991 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.258208 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.258391 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:06.258571 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:06.258830 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:06.258847 2180315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:26:06.370228 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:26:06.370253 2180315 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:26:06.370261 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.373176 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.373546 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.373570 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.373778 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:06.374027 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.374191 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.374367 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:06.374532 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:06.374772 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:06.374787 2180315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:26:06.487716 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:26:06.487797 2180315 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:26:06.487812 2180315 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:26:06.487822 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:26:06.488060 2180315 buildroot.go:166] provisioning hostname "old-k8s-version-806597"
	I0120 16:26:06.488102 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:26:06.488215 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.491033 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.491430 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.491461 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.491618 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:06.491798 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.491958 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.492108 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:06.492320 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:06.492530 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:06.492559 2180315 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-806597 && echo "old-k8s-version-806597" | sudo tee /etc/hostname
	I0120 16:26:06.622625 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-806597
	
	I0120 16:26:06.622683 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.625695 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.626064 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.626093 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.626318 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:06.626542 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.626719 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.626838 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:06.627006 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:06.627249 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:06.627268 2180315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-806597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-806597/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-806597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:26:06.750870 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:26:06.750917 2180315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:26:06.750943 2180315 buildroot.go:174] setting up certificates
	I0120 16:26:06.750959 2180315 provision.go:84] configureAuth start
	I0120 16:26:06.750979 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:26:06.751306 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:26:06.754453 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.754849 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.754886 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.755018 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.757590 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.757935 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.757965 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.758172 2180315 provision.go:143] copyHostCerts
	I0120 16:26:06.758244 2180315 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:26:06.758258 2180315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:26:06.758329 2180315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:26:06.758465 2180315 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:26:06.758476 2180315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:26:06.758501 2180315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:26:06.758594 2180315 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:26:06.758623 2180315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:26:06.758655 2180315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:26:06.758745 2180315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-806597 san=[127.0.0.1 192.168.50.241 localhost minikube old-k8s-version-806597]
	I0120 16:26:07.098838 2180315 provision.go:177] copyRemoteCerts
	I0120 16:26:07.098934 2180315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:26:07.098970 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.101838 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.102155 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.102181 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.102361 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.102576 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.102760 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.102866 2180315 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:26:07.190087 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:26:07.214751 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 16:26:07.240207 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 16:26:07.266308 2180315 provision.go:87] duration metric: took 515.329527ms to configureAuth
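For reference, the server certificate generated by configureAuth above can be inspected by hand on the node; a minimal sketch, assuming openssl is available in the guest (the check is illustrative and not part of the test run, paths are taken from the log):

	# Print the SANs of the server cert copied to /etc/docker/server.pem above;
	# they should include 192.168.50.241, localhost, minikube and old-k8s-version-806597.
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
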
	I0120 16:26:07.266342 2180315 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:26:07.266540 2180315 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:26:07.266653 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.269551 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.269905 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.269938 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.270096 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.270354 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.270557 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.270747 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.270943 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:07.271130 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:07.271146 2180315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:26:07.509188 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:26:07.509227 2180315 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:26:07.509241 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetURL
	I0120 16:26:07.510661 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | using libvirt version 6000000
	I0120 16:26:07.513007 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.513374 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.513417 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.513621 2180315 main.go:141] libmachine: Docker is up and running!
	I0120 16:26:07.513643 2180315 main.go:141] libmachine: Reticulating splines...
	I0120 16:26:07.513672 2180315 client.go:171] duration metric: took 24.871443012s to LocalClient.Create
	I0120 16:26:07.513703 2180315 start.go:167] duration metric: took 24.87153796s to libmachine.API.Create "old-k8s-version-806597"
	I0120 16:26:07.513715 2180315 start.go:293] postStartSetup for "old-k8s-version-806597" (driver="kvm2")
	I0120 16:26:07.513729 2180315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:26:07.513749 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.514044 2180315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:26:07.514072 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.516543 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.516855 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.516880 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.517133 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.517362 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.517578 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.517719 2180315 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:26:07.606722 2180315 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:26:07.611484 2180315 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:26:07.611519 2180315 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:26:07.611602 2180315 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:26:07.611675 2180315 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:26:07.611800 2180315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:26:07.621801 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:26:07.648048 2180315 start.go:296] duration metric: took 134.299886ms for postStartSetup
	I0120 16:26:07.648126 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetConfigRaw
	I0120 16:26:07.648839 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:26:07.651615 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.651998 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.652026 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.652217 2180315 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/config.json ...
	I0120 16:26:07.652489 2180315 start.go:128] duration metric: took 25.03232575s to createHost
	I0120 16:26:07.652518 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.654957 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.655361 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.655389 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.655513 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.655746 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.655900 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.656069 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.656222 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:07.656387 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:07.656397 2180315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:26:07.771640 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737390367.742600560
	
	I0120 16:26:07.771676 2180315 fix.go:216] guest clock: 1737390367.742600560
	I0120 16:26:07.771684 2180315 fix.go:229] Guest: 2025-01-20 16:26:07.74260056 +0000 UTC Remote: 2025-01-20 16:26:07.652504125 +0000 UTC m=+61.859229819 (delta=90.096435ms)
	I0120 16:26:07.771709 2180315 fix.go:200] guest clock delta is within tolerance: 90.096435ms
	I0120 16:26:07.771717 2180315 start.go:83] releasing machines lock for "old-k8s-version-806597", held for 25.151752748s
	I0120 16:26:07.771752 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.772033 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:26:07.775217 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.775707 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.775749 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.776022 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.776781 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.777036 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.777158 2180315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:26:07.777222 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.777536 2180315 ssh_runner.go:195] Run: cat /version.json
	I0120 16:26:07.777560 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.780250 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.780595 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.780643 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.780675 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.780825 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.780957 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.780981 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.780990 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.781150 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.781157 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.781323 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.781315 2180315 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:26:07.781494 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.781648 2180315 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:26:07.864771 2180315 ssh_runner.go:195] Run: systemctl --version
	I0120 16:26:07.892210 2180315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:26:08.068848 2180315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:26:08.075723 2180315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:26:08.075810 2180315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:26:08.093929 2180315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:26:08.093977 2180315 start.go:495] detecting cgroup driver to use...
	I0120 16:26:08.094099 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:26:08.118924 2180315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:26:08.139602 2180315 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:26:08.139676 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:26:08.155249 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:26:08.170466 2180315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:26:08.298339 2180315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:26:08.482682 2180315 docker.go:233] disabling docker service ...
	I0120 16:26:08.482763 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:26:08.503903 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:26:08.517728 2180315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:26:08.655481 2180315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:26:08.810102 2180315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:26:08.825925 2180315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:26:08.846193 2180315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 16:26:08.846277 2180315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:08.857437 2180315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:26:08.857539 2180315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:08.869364 2180315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:08.881019 2180315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:08.892614 2180315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:26:08.904669 2180315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:26:08.915983 2180315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:26:08.916058 2180315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:26:08.933201 2180315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:26:08.943616 2180315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:26:09.105895 2180315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:26:09.227633 2180315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:26:09.227733 2180315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:26:09.235332 2180315 start.go:563] Will wait 60s for crictl version
	I0120 16:26:09.235428 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:09.240095 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:26:09.292885 2180315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:26:09.293039 2180315 ssh_runner.go:195] Run: crio --version
	I0120 16:26:09.324814 2180315 ssh_runner.go:195] Run: crio --version
	I0120 16:26:09.358115 2180315 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
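The CRI-O preparation logged above reduces to a short shell sequence; a condensed sketch with paths and values taken from the log (illustrative only, not a replacement for the test's ssh_runner calls):

	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and switch CRI-O to the cgroupfs cgroup manager.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# Kernel prerequisites for pod networking, then restart the runtime.
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version   # expected: RuntimeName cri-o, RuntimeVersion 1.29.1
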
	I0120 16:26:07.774592 2180359 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 16:26:07.774850 2180359 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:26:07.774883 2180359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:26:07.792726 2180359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I0120 16:26:07.793356 2180359 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:26:07.794069 2180359 main.go:141] libmachine: Using API Version  1
	I0120 16:26:07.794099 2180359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:26:07.794453 2180359 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:26:07.794690 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetMachineName
	I0120 16:26:07.794863 2180359 main.go:141] libmachine: (no-preload-552545) Calling .DriverName
	I0120 16:26:07.795023 2180359 start.go:159] libmachine.API.Create for "no-preload-552545" (driver="kvm2")
	I0120 16:26:07.795065 2180359 client.go:168] LocalClient.Create starting
	I0120 16:26:07.795104 2180359 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:26:07.795155 2180359 main.go:141] libmachine: Decoding PEM data...
	I0120 16:26:07.795178 2180359 main.go:141] libmachine: Parsing certificate...
	I0120 16:26:07.795250 2180359 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:26:07.795281 2180359 main.go:141] libmachine: Decoding PEM data...
	I0120 16:26:07.795296 2180359 main.go:141] libmachine: Parsing certificate...
	I0120 16:26:07.795323 2180359 main.go:141] libmachine: Running pre-create checks...
	I0120 16:26:07.795336 2180359 main.go:141] libmachine: (no-preload-552545) Calling .PreCreateCheck
	I0120 16:26:07.795691 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetConfigRaw
	I0120 16:26:07.796113 2180359 main.go:141] libmachine: Creating machine...
	I0120 16:26:07.796130 2180359 main.go:141] libmachine: (no-preload-552545) Calling .Create
	I0120 16:26:07.796300 2180359 main.go:141] libmachine: (no-preload-552545) creating KVM machine...
	I0120 16:26:07.796319 2180359 main.go:141] libmachine: (no-preload-552545) creating network...
	I0120 16:26:07.797786 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found existing default KVM network
	I0120 16:26:07.799192 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:07.799010 2181234 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00021f5d0}
	I0120 16:26:07.799217 2180359 main.go:141] libmachine: (no-preload-552545) DBG | created network xml: 
	I0120 16:26:07.799231 2180359 main.go:141] libmachine: (no-preload-552545) DBG | <network>
	I0120 16:26:07.799244 2180359 main.go:141] libmachine: (no-preload-552545) DBG |   <name>mk-no-preload-552545</name>
	I0120 16:26:07.799253 2180359 main.go:141] libmachine: (no-preload-552545) DBG |   <dns enable='no'/>
	I0120 16:26:07.799260 2180359 main.go:141] libmachine: (no-preload-552545) DBG |   
	I0120 16:26:07.799277 2180359 main.go:141] libmachine: (no-preload-552545) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0120 16:26:07.799285 2180359 main.go:141] libmachine: (no-preload-552545) DBG |     <dhcp>
	I0120 16:26:07.799298 2180359 main.go:141] libmachine: (no-preload-552545) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0120 16:26:07.799313 2180359 main.go:141] libmachine: (no-preload-552545) DBG |     </dhcp>
	I0120 16:26:07.799321 2180359 main.go:141] libmachine: (no-preload-552545) DBG |   </ip>
	I0120 16:26:07.799330 2180359 main.go:141] libmachine: (no-preload-552545) DBG |   
	I0120 16:26:07.799337 2180359 main.go:141] libmachine: (no-preload-552545) DBG | </network>
	I0120 16:26:07.799347 2180359 main.go:141] libmachine: (no-preload-552545) DBG | 
	I0120 16:26:07.804979 2180359 main.go:141] libmachine: (no-preload-552545) DBG | trying to create private KVM network mk-no-preload-552545 192.168.39.0/24...
	I0120 16:26:07.886189 2180359 main.go:141] libmachine: (no-preload-552545) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545 ...
	I0120 16:26:07.886227 2180359 main.go:141] libmachine: (no-preload-552545) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:26:07.886239 2180359 main.go:141] libmachine: (no-preload-552545) DBG | private KVM network mk-no-preload-552545 192.168.39.0/24 created
	I0120 16:26:07.886271 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:07.886116 2181234 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:26:07.886334 2180359 main.go:141] libmachine: (no-preload-552545) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:26:08.221276 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:08.221049 2181234 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/id_rsa...
	I0120 16:26:08.432723 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:08.432545 2181234 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/no-preload-552545.rawdisk...
	I0120 16:26:08.432763 2180359 main.go:141] libmachine: (no-preload-552545) DBG | Writing magic tar header
	I0120 16:26:08.432780 2180359 main.go:141] libmachine: (no-preload-552545) DBG | Writing SSH key tar header
	I0120 16:26:08.432794 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:08.432712 2181234 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545 ...
	I0120 16:26:08.432914 2180359 main.go:141] libmachine: (no-preload-552545) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545
	I0120 16:26:08.432954 2180359 main.go:141] libmachine: (no-preload-552545) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545 (perms=drwx------)
	I0120 16:26:08.432967 2180359 main.go:141] libmachine: (no-preload-552545) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:26:08.432979 2180359 main.go:141] libmachine: (no-preload-552545) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:26:08.433002 2180359 main.go:141] libmachine: (no-preload-552545) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:26:08.433016 2180359 main.go:141] libmachine: (no-preload-552545) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:26:08.433063 2180359 main.go:141] libmachine: (no-preload-552545) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:26:08.433084 2180359 main.go:141] libmachine: (no-preload-552545) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:26:08.433094 2180359 main.go:141] libmachine: (no-preload-552545) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:26:08.433148 2180359 main.go:141] libmachine: (no-preload-552545) creating domain...
	I0120 16:26:08.433175 2180359 main.go:141] libmachine: (no-preload-552545) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:26:08.433194 2180359 main.go:141] libmachine: (no-preload-552545) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:26:08.433211 2180359 main.go:141] libmachine: (no-preload-552545) DBG | checking permissions on dir: /home/jenkins
	I0120 16:26:08.433226 2180359 main.go:141] libmachine: (no-preload-552545) DBG | checking permissions on dir: /home
	I0120 16:26:08.433234 2180359 main.go:141] libmachine: (no-preload-552545) DBG | skipping /home - not owner
	I0120 16:26:08.434481 2180359 main.go:141] libmachine: (no-preload-552545) define libvirt domain using xml: 
	I0120 16:26:08.434498 2180359 main.go:141] libmachine: (no-preload-552545) <domain type='kvm'>
	I0120 16:26:08.434508 2180359 main.go:141] libmachine: (no-preload-552545)   <name>no-preload-552545</name>
	I0120 16:26:08.434515 2180359 main.go:141] libmachine: (no-preload-552545)   <memory unit='MiB'>2200</memory>
	I0120 16:26:08.434523 2180359 main.go:141] libmachine: (no-preload-552545)   <vcpu>2</vcpu>
	I0120 16:26:08.434530 2180359 main.go:141] libmachine: (no-preload-552545)   <features>
	I0120 16:26:08.434540 2180359 main.go:141] libmachine: (no-preload-552545)     <acpi/>
	I0120 16:26:08.434547 2180359 main.go:141] libmachine: (no-preload-552545)     <apic/>
	I0120 16:26:08.434564 2180359 main.go:141] libmachine: (no-preload-552545)     <pae/>
	I0120 16:26:08.434580 2180359 main.go:141] libmachine: (no-preload-552545)     
	I0120 16:26:08.434588 2180359 main.go:141] libmachine: (no-preload-552545)   </features>
	I0120 16:26:08.434593 2180359 main.go:141] libmachine: (no-preload-552545)   <cpu mode='host-passthrough'>
	I0120 16:26:08.434598 2180359 main.go:141] libmachine: (no-preload-552545)   
	I0120 16:26:08.434618 2180359 main.go:141] libmachine: (no-preload-552545)   </cpu>
	I0120 16:26:08.434651 2180359 main.go:141] libmachine: (no-preload-552545)   <os>
	I0120 16:26:08.434674 2180359 main.go:141] libmachine: (no-preload-552545)     <type>hvm</type>
	I0120 16:26:08.434681 2180359 main.go:141] libmachine: (no-preload-552545)     <boot dev='cdrom'/>
	I0120 16:26:08.434689 2180359 main.go:141] libmachine: (no-preload-552545)     <boot dev='hd'/>
	I0120 16:26:08.434714 2180359 main.go:141] libmachine: (no-preload-552545)     <bootmenu enable='no'/>
	I0120 16:26:08.434732 2180359 main.go:141] libmachine: (no-preload-552545)   </os>
	I0120 16:26:08.434744 2180359 main.go:141] libmachine: (no-preload-552545)   <devices>
	I0120 16:26:08.434752 2180359 main.go:141] libmachine: (no-preload-552545)     <disk type='file' device='cdrom'>
	I0120 16:26:08.434769 2180359 main.go:141] libmachine: (no-preload-552545)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/boot2docker.iso'/>
	I0120 16:26:08.434780 2180359 main.go:141] libmachine: (no-preload-552545)       <target dev='hdc' bus='scsi'/>
	I0120 16:26:08.434792 2180359 main.go:141] libmachine: (no-preload-552545)       <readonly/>
	I0120 16:26:08.434805 2180359 main.go:141] libmachine: (no-preload-552545)     </disk>
	I0120 16:26:08.434819 2180359 main.go:141] libmachine: (no-preload-552545)     <disk type='file' device='disk'>
	I0120 16:26:08.434832 2180359 main.go:141] libmachine: (no-preload-552545)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:26:08.434848 2180359 main.go:141] libmachine: (no-preload-552545)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/no-preload-552545.rawdisk'/>
	I0120 16:26:08.434859 2180359 main.go:141] libmachine: (no-preload-552545)       <target dev='hda' bus='virtio'/>
	I0120 16:26:08.434867 2180359 main.go:141] libmachine: (no-preload-552545)     </disk>
	I0120 16:26:08.434882 2180359 main.go:141] libmachine: (no-preload-552545)     <interface type='network'>
	I0120 16:26:08.434895 2180359 main.go:141] libmachine: (no-preload-552545)       <source network='mk-no-preload-552545'/>
	I0120 16:26:08.434907 2180359 main.go:141] libmachine: (no-preload-552545)       <model type='virtio'/>
	I0120 16:26:08.434919 2180359 main.go:141] libmachine: (no-preload-552545)     </interface>
	I0120 16:26:08.434929 2180359 main.go:141] libmachine: (no-preload-552545)     <interface type='network'>
	I0120 16:26:08.434957 2180359 main.go:141] libmachine: (no-preload-552545)       <source network='default'/>
	I0120 16:26:08.434972 2180359 main.go:141] libmachine: (no-preload-552545)       <model type='virtio'/>
	I0120 16:26:08.434984 2180359 main.go:141] libmachine: (no-preload-552545)     </interface>
	I0120 16:26:08.434994 2180359 main.go:141] libmachine: (no-preload-552545)     <serial type='pty'>
	I0120 16:26:08.435005 2180359 main.go:141] libmachine: (no-preload-552545)       <target port='0'/>
	I0120 16:26:08.435015 2180359 main.go:141] libmachine: (no-preload-552545)     </serial>
	I0120 16:26:08.435026 2180359 main.go:141] libmachine: (no-preload-552545)     <console type='pty'>
	I0120 16:26:08.435034 2180359 main.go:141] libmachine: (no-preload-552545)       <target type='serial' port='0'/>
	I0120 16:26:08.435041 2180359 main.go:141] libmachine: (no-preload-552545)     </console>
	I0120 16:26:08.435047 2180359 main.go:141] libmachine: (no-preload-552545)     <rng model='virtio'>
	I0120 16:26:08.435055 2180359 main.go:141] libmachine: (no-preload-552545)       <backend model='random'>/dev/random</backend>
	I0120 16:26:08.435062 2180359 main.go:141] libmachine: (no-preload-552545)     </rng>
	I0120 16:26:08.435073 2180359 main.go:141] libmachine: (no-preload-552545)     
	I0120 16:26:08.435090 2180359 main.go:141] libmachine: (no-preload-552545)     
	I0120 16:26:08.435102 2180359 main.go:141] libmachine: (no-preload-552545)   </devices>
	I0120 16:26:08.435141 2180359 main.go:141] libmachine: (no-preload-552545) </domain>
	I0120 16:26:08.435154 2180359 main.go:141] libmachine: (no-preload-552545) 
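Once the domain above has been defined, the result can be cross-checked against libvirt directly; a small sketch using standard virsh commands (not part of the test run):

	# Confirm the private network and the guest for the no-preload profile exist.
	sudo virsh net-list --all | grep mk-no-preload-552545
	sudo virsh list --all | grep no-preload-552545
	# Dump the effective XML to compare with what libmachine logged above.
	sudo virsh dumpxml no-preload-552545 | head -n 40
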
	I0120 16:26:08.439820 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:82:05:4b in network default
	I0120 16:26:08.440432 2180359 main.go:141] libmachine: (no-preload-552545) starting domain...
	I0120 16:26:08.440459 2180359 main.go:141] libmachine: (no-preload-552545) ensuring networks are active...
	I0120 16:26:08.440470 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:08.441169 2180359 main.go:141] libmachine: (no-preload-552545) Ensuring network default is active
	I0120 16:26:08.441495 2180359 main.go:141] libmachine: (no-preload-552545) Ensuring network mk-no-preload-552545 is active
	I0120 16:26:08.442144 2180359 main.go:141] libmachine: (no-preload-552545) getting domain XML...
	I0120 16:26:08.443098 2180359 main.go:141] libmachine: (no-preload-552545) creating domain...
	I0120 16:26:09.767593 2180359 main.go:141] libmachine: (no-preload-552545) waiting for IP...
	I0120 16:26:09.768626 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:09.769113 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:09.769194 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:09.769118 2181234 retry.go:31] will retry after 250.751261ms: waiting for domain to come up
	I0120 16:26:10.021990 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:10.022662 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:10.022707 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:10.022668 2181234 retry.go:31] will retry after 286.591714ms: waiting for domain to come up
	I0120 16:26:09.359602 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:26:09.363063 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:09.363567 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:09.363603 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:09.363883 2180315 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 16:26:09.368562 2180315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:26:09.381987 2180315 kubeadm.go:883] updating cluster {Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:26:09.382129 2180315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 16:26:09.382187 2180315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:26:09.419218 2180315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 16:26:09.419289 2180315 ssh_runner.go:195] Run: which lz4
	I0120 16:26:09.423711 2180315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:26:09.428502 2180315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:26:09.428538 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 16:26:10.311433 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:10.312110 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:10.312144 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:10.312076 2181234 retry.go:31] will retry after 309.3865ms: waiting for domain to come up
	I0120 16:26:10.623613 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:10.624210 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:10.624247 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:10.624171 2181234 retry.go:31] will retry after 549.530934ms: waiting for domain to come up
	I0120 16:26:11.175234 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:11.175805 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:11.175837 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:11.175790 2181234 retry.go:31] will retry after 513.373725ms: waiting for domain to come up
	I0120 16:26:11.690758 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:11.691243 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:11.691270 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:11.691203 2181234 retry.go:31] will retry after 801.587218ms: waiting for domain to come up
	I0120 16:26:12.494411 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:12.494993 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:12.495054 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:12.494944 2181234 retry.go:31] will retry after 1.031834951s: waiting for domain to come up
	I0120 16:26:13.528965 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:13.529441 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:13.529478 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:13.529397 2181234 retry.go:31] will retry after 906.946496ms: waiting for domain to come up
	I0120 16:26:14.437942 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:14.438571 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:14.438617 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:14.438515 2181234 retry.go:31] will retry after 1.264816004s: waiting for domain to come up
	I0120 16:26:11.255602 2180315 crio.go:462] duration metric: took 1.831932276s to copy over tarball
	I0120 16:26:11.255721 2180315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:26:13.958662 2180315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.702897487s)
	I0120 16:26:13.958696 2180315 crio.go:469] duration metric: took 2.703057594s to extract the tarball
	I0120 16:26:13.958704 2180315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:26:14.003804 2180315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:26:14.054640 2180315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 16:26:14.054677 2180315 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 16:26:14.054745 2180315 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:14.054793 2180315 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.054813 2180315 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 16:26:14.054842 2180315 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.054874 2180315 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.054843 2180315 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.054852 2180315 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.054796 2180315 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.056197 2180315 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.056442 2180315 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.056458 2180315 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.056501 2180315 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:14.056442 2180315 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.056446 2180315 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.056571 2180315 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 16:26:14.056504 2180315 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.255911 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.261015 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.271871 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 16:26:14.283998 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.287187 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.299760 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.329076 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.396160 2180315 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 16:26:14.396234 2180315 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.396338 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.396341 2180315 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 16:26:14.396404 2180315 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.396461 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.426212 2180315 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 16:26:14.426273 2180315 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 16:26:14.426342 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.445345 2180315 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 16:26:14.445399 2180315 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.445455 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.470555 2180315 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 16:26:14.470624 2180315 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.470701 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.475886 2180315 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 16:26:14.475959 2180315 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.476032 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.483228 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:14.485259 2180315 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 16:26:14.485316 2180315 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.485360 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.485370 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.485361 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.485396 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:26:14.485433 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.485496 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.485502 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.771462 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.771604 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.771626 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:26:14.771678 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.771745 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.771763 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.771811 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.948549 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.948568 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.948595 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.948653 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.948759 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.948773 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:26:14.948838 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:26:15.113088 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:15.113170 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 16:26:15.113333 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 16:26:15.113423 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 16:26:15.113461 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 16:26:15.113517 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 16:26:15.113585 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 16:26:15.149469 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 16:26:15.149576 2180315 cache_images.go:92] duration metric: took 1.094879778s to LoadCachedImages
	W0120 16:26:15.149665 2180315 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0120 16:26:15.149685 2180315 kubeadm.go:934] updating node { 192.168.50.241 8443 v1.20.0 crio true true} ...
	I0120 16:26:15.149874 2180315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-806597 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:26:15.149965 2180315 ssh_runner.go:195] Run: crio config
	I0120 16:26:15.210239 2180315 cni.go:84] Creating CNI manager for ""
	I0120 16:26:15.210274 2180315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:26:15.210289 2180315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:26:15.210311 2180315 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.241 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-806597 NodeName:old-k8s-version-806597 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 16:26:15.210498 2180315 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-806597"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:26:15.210588 2180315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 16:26:15.222433 2180315 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:26:15.222528 2180315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:26:15.234452 2180315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 16:26:15.253710 2180315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:26:15.274391 2180315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 16:26:15.295494 2180315 ssh_runner.go:195] Run: grep 192.168.50.241	control-plane.minikube.internal$ /etc/hosts
	I0120 16:26:15.300358 2180315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:26:15.314207 2180315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:26:15.445942 2180315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:26:15.467879 2180315 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597 for IP: 192.168.50.241
	I0120 16:26:15.467911 2180315 certs.go:194] generating shared ca certs ...
	I0120 16:26:15.467937 2180315 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.468170 2180315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:26:15.468250 2180315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:26:15.468266 2180315 certs.go:256] generating profile certs ...
	I0120 16:26:15.468364 2180315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.key
	I0120 16:26:15.468390 2180315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.crt with IP's: []
	I0120 16:26:15.577472 2180315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.crt ...
	I0120 16:26:15.577509 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.crt: {Name:mk914869e99403fc00f1cc4cad2ac1e0f3ec5551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.577754 2180315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.key ...
	I0120 16:26:15.577783 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.key: {Name:mkb92bb614ad1cca6b0bdf061440a9ad4a00c5e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.577908 2180315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1
	I0120 16:26:15.577927 2180315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt.72816fb1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.241]
	I0120 16:26:15.668661 2180315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt.72816fb1 ...
	I0120 16:26:15.668711 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt.72816fb1: {Name:mk8ca43e254a5404c4e4ca93c5c33b7ec4ae25d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.668957 2180315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1 ...
	I0120 16:26:15.668994 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1: {Name:mkd7928d3b6f1cd61571f995c138a3935139db8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.669164 2180315 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt.72816fb1 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt
	I0120 16:26:15.669301 2180315 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key
	I0120 16:26:15.669399 2180315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key
	I0120 16:26:15.669437 2180315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt with IP's: []
	I0120 16:26:15.967203 2180315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt ...
	I0120 16:26:15.967244 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt: {Name:mk9c2857e3082a01d8d3c5bec5ce892ccc2ad7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.967440 2180315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key ...
	I0120 16:26:15.967455 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key: {Name:mk4c49d697564a24897bf19dd29d9182642aa2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.967625 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:26:15.967664 2180315 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:26:15.967676 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:26:15.967699 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:26:15.967723 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:26:15.967744 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:26:15.967781 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:26:15.968409 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:26:15.996020 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:26:16.024650 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:26:16.052287 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:26:16.080693 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 16:26:16.108290 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 16:26:16.133710 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:26:16.159665 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 16:26:16.187716 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:26:16.213166 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:26:16.239603 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:26:16.265627 2180315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:26:16.286526 2180315 ssh_runner.go:195] Run: openssl version
	I0120 16:26:16.293672 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:26:16.306832 2180315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:26:16.312186 2180315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:26:16.312254 2180315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:26:16.319194 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:26:16.339608 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:26:16.361478 2180315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:16.370929 2180315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:16.371021 2180315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:16.377897 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:26:16.395739 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:26:16.412706 2180315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:26:16.419322 2180315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:26:16.419420 2180315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:26:16.426849 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:26:16.442098 2180315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:26:16.447038 2180315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:26:16.447118 2180315 kubeadm.go:392] StartCluster: {Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:26:16.447228 2180315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:26:16.447310 2180315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:26:16.488184 2180315 cri.go:89] found id: ""
	I0120 16:26:16.488327 2180315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:26:16.499968 2180315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:26:16.510813 2180315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:26:16.521717 2180315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:26:16.521744 2180315 kubeadm.go:157] found existing configuration files:
	
	I0120 16:26:16.521802 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:26:16.532541 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:26:16.532660 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:26:16.544206 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:26:16.554880 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:26:16.554976 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:26:16.567537 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:26:16.579804 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:26:16.579878 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:26:16.592478 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:26:16.604442 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:26:16.604526 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:26:16.616832 2180315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:26:16.743059 2180315 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 16:26:16.743124 2180315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:26:16.887386 2180315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:26:16.887552 2180315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:26:16.887686 2180315 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 16:26:17.092403 2180315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:26:15.705773 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:15.706298 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:15.706322 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:15.706260 2181234 retry.go:31] will retry after 2.16727729s: waiting for domain to come up
	I0120 16:26:17.874845 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:17.875367 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:17.875416 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:17.875321 2181234 retry.go:31] will retry after 2.422862522s: waiting for domain to come up
	I0120 16:26:17.095152 2180315 out.go:235]   - Generating certificates and keys ...
	I0120 16:26:17.095265 2180315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:26:17.095400 2180315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:26:17.232982 2180315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:26:17.360580 2180315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:26:17.458945 2180315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:26:17.755642 2180315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:26:17.994212 2180315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:26:17.994402 2180315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-806597] and IPs [192.168.50.241 127.0.0.1 ::1]
	I0120 16:26:18.167059 2180315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:26:18.167305 2180315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-806597] and IPs [192.168.50.241 127.0.0.1 ::1]
	I0120 16:26:18.343031 2180315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:26:18.731988 2180315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:26:19.042242 2180315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:26:19.042344 2180315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:26:19.483495 2180315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:26:19.688174 2180315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:26:19.999937 2180315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:26:20.097904 2180315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:26:20.123004 2180315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:26:20.123402 2180315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:26:20.123476 2180315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:26:20.273327 2180315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:26:20.275346 2180315 out.go:235]   - Booting up control plane ...
	I0120 16:26:20.275493 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:26:20.282943 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:26:20.284334 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:26:20.285288 2180315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:26:20.290986 2180315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 16:26:20.300135 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:20.300680 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:20.300708 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:20.300652 2181234 retry.go:31] will retry after 3.280914386s: waiting for domain to come up
	I0120 16:26:23.583344 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:23.583753 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:23.583780 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:23.583743 2181234 retry.go:31] will retry after 3.37513518s: waiting for domain to come up
	I0120 16:26:26.963395 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:26.963895 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find current IP address of domain no-preload-552545 in network mk-no-preload-552545
	I0120 16:26:26.963925 2180359 main.go:141] libmachine: (no-preload-552545) DBG | I0120 16:26:26.963837 2181234 retry.go:31] will retry after 4.260934196s: waiting for domain to come up
	I0120 16:26:32.787742 2180725 start.go:364] duration metric: took 52.140783192s to acquireMachinesLock for "kubernetes-upgrade-207056"
	I0120 16:26:32.787831 2180725 start.go:96] Skipping create...Using existing machine configuration
	I0120 16:26:32.787845 2180725 fix.go:54] fixHost starting: 
	I0120 16:26:32.788291 2180725 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:26:32.788358 2180725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:26:32.805651 2180725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
	I0120 16:26:32.806170 2180725 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:26:32.806781 2180725 main.go:141] libmachine: Using API Version  1
	I0120 16:26:32.806827 2180725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:26:32.807170 2180725 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:26:32.807360 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:26:32.807512 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetState
	I0120 16:26:32.809301 2180725 fix.go:112] recreateIfNeeded on kubernetes-upgrade-207056: state=Running err=<nil>
	W0120 16:26:32.809327 2180725 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 16:26:32.811457 2180725 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-207056" VM ...
	I0120 16:26:31.227820 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.228347 2180359 main.go:141] libmachine: (no-preload-552545) found domain IP: 192.168.39.131
	I0120 16:26:31.228407 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has current primary IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.228419 2180359 main.go:141] libmachine: (no-preload-552545) reserving static IP address...
	I0120 16:26:31.228912 2180359 main.go:141] libmachine: (no-preload-552545) DBG | unable to find host DHCP lease matching {name: "no-preload-552545", mac: "52:54:00:02:1b:04", ip: "192.168.39.131"} in network mk-no-preload-552545
	I0120 16:26:31.313590 2180359 main.go:141] libmachine: (no-preload-552545) reserved static IP address 192.168.39.131 for domain no-preload-552545
	I0120 16:26:31.313632 2180359 main.go:141] libmachine: (no-preload-552545) DBG | Getting to WaitForSSH function...
	I0120 16:26:31.313641 2180359 main.go:141] libmachine: (no-preload-552545) waiting for SSH...
	I0120 16:26:31.316601 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.317017 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:minikube Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:31.317045 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.317192 2180359 main.go:141] libmachine: (no-preload-552545) DBG | Using SSH client type: external
	I0120 16:26:31.317247 2180359 main.go:141] libmachine: (no-preload-552545) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/id_rsa (-rw-------)
	I0120 16:26:31.317293 2180359 main.go:141] libmachine: (no-preload-552545) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:26:31.317327 2180359 main.go:141] libmachine: (no-preload-552545) DBG | About to run SSH command:
	I0120 16:26:31.317341 2180359 main.go:141] libmachine: (no-preload-552545) DBG | exit 0
	I0120 16:26:31.450801 2180359 main.go:141] libmachine: (no-preload-552545) DBG | SSH cmd err, output: <nil>: 
	I0120 16:26:31.451119 2180359 main.go:141] libmachine: (no-preload-552545) KVM machine creation complete
	I0120 16:26:31.451474 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetConfigRaw
	I0120 16:26:31.452079 2180359 main.go:141] libmachine: (no-preload-552545) Calling .DriverName
	I0120 16:26:31.452279 2180359 main.go:141] libmachine: (no-preload-552545) Calling .DriverName
	I0120 16:26:31.452454 2180359 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:26:31.452471 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetState
	I0120 16:26:31.453883 2180359 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:26:31.453896 2180359 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:26:31.453902 2180359 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:26:31.453907 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:31.456872 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.457294 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:31.457339 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.457472 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:31.457640 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:31.457795 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:31.457903 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:31.458088 2180359 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:31.458323 2180359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0120 16:26:31.458335 2180359 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:26:31.570336 2180359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:26:31.570362 2180359 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:26:31.570371 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:31.573852 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.574257 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:31.574285 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.574449 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:31.574663 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:31.574870 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:31.575011 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:31.575194 2180359 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:31.575381 2180359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0120 16:26:31.575392 2180359 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:26:31.692094 2180359 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:26:31.692214 2180359 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:26:31.692226 2180359 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:26:31.692235 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetMachineName
	I0120 16:26:31.692537 2180359 buildroot.go:166] provisioning hostname "no-preload-552545"
	I0120 16:26:31.692571 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetMachineName
	I0120 16:26:31.692800 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:31.695776 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.696186 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:31.696209 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.696464 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:31.696678 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:31.696851 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:31.697028 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:31.697194 2180359 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:31.697407 2180359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0120 16:26:31.697424 2180359 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-552545 && echo "no-preload-552545" | sudo tee /etc/hostname
	I0120 16:26:31.826970 2180359 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-552545
	
	I0120 16:26:31.827010 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:31.830373 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.830788 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:31.830819 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.831069 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:31.831274 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:31.831477 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:31.831650 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:31.831851 2180359 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:31.832048 2180359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0120 16:26:31.832067 2180359 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-552545' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-552545/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-552545' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:26:31.956560 2180359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:26:31.956601 2180359 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:26:31.956656 2180359 buildroot.go:174] setting up certificates
	I0120 16:26:31.956675 2180359 provision.go:84] configureAuth start
	I0120 16:26:31.956698 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetMachineName
	I0120 16:26:31.957005 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetIP
	I0120 16:26:31.960322 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.960759 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:31.960793 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.960931 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:31.963345 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.963712 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:31.963742 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:31.963927 2180359 provision.go:143] copyHostCerts
	I0120 16:26:31.963994 2180359 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:26:31.964007 2180359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:26:31.964079 2180359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:26:31.964204 2180359 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:26:31.964216 2180359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:26:31.964238 2180359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:26:31.964303 2180359 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:26:31.964310 2180359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:26:31.964327 2180359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:26:31.964389 2180359 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.no-preload-552545 san=[127.0.0.1 192.168.39.131 localhost minikube no-preload-552545]
	I0120 16:26:32.100260 2180359 provision.go:177] copyRemoteCerts
	I0120 16:26:32.100330 2180359 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:26:32.100359 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:32.103422 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.103703 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.103730 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.104001 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:32.104232 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:32.104395 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:32.104563 2180359 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/id_rsa Username:docker}
	I0120 16:26:32.193731 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:26:32.220278 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 16:26:32.251111 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:26:32.280225 2180359 provision.go:87] duration metric: took 323.528858ms to configureAuth
	I0120 16:26:32.280259 2180359 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:26:32.280487 2180359 config.go:182] Loaded profile config "no-preload-552545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:26:32.280582 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:32.283450 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.283810 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.283842 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.283993 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:32.284200 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:32.284363 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:32.284541 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:32.284759 2180359 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:32.284941 2180359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0120 16:26:32.284955 2180359 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:26:32.526559 2180359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:26:32.526702 2180359 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:26:32.526728 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetURL
	I0120 16:26:32.528245 2180359 main.go:141] libmachine: (no-preload-552545) DBG | using libvirt version 6000000
	I0120 16:26:32.530996 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.531415 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.531447 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.531636 2180359 main.go:141] libmachine: Docker is up and running!
	I0120 16:26:32.531656 2180359 main.go:141] libmachine: Reticulating splines...
	I0120 16:26:32.531664 2180359 client.go:171] duration metric: took 24.736590348s to LocalClient.Create
	I0120 16:26:32.531690 2180359 start.go:167] duration metric: took 24.736666902s to libmachine.API.Create "no-preload-552545"
	I0120 16:26:32.531704 2180359 start.go:293] postStartSetup for "no-preload-552545" (driver="kvm2")
	I0120 16:26:32.531717 2180359 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:26:32.531737 2180359 main.go:141] libmachine: (no-preload-552545) Calling .DriverName
	I0120 16:26:32.531999 2180359 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:26:32.532043 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:32.534400 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.534802 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.534835 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.535028 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:32.535218 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:32.535417 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:32.535554 2180359 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/id_rsa Username:docker}
	I0120 16:26:32.621844 2180359 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:26:32.626575 2180359 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:26:32.626618 2180359 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:26:32.626685 2180359 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:26:32.626756 2180359 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:26:32.626845 2180359 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:26:32.636858 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:26:32.662869 2180359 start.go:296] duration metric: took 131.145558ms for postStartSetup
	I0120 16:26:32.662937 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetConfigRaw
	I0120 16:26:32.663766 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetIP
	I0120 16:26:32.666743 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.667151 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.667186 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.667403 2180359 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/config.json ...
	I0120 16:26:32.667634 2180359 start.go:128] duration metric: took 24.89551146s to createHost
	I0120 16:26:32.667664 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:32.669959 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.670359 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.670396 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.670589 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:32.670821 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:32.670969 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:32.671126 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:32.671294 2180359 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:32.671462 2180359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0120 16:26:32.671473 2180359 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:26:32.787574 2180359 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737390392.757773704
	
	I0120 16:26:32.787606 2180359 fix.go:216] guest clock: 1737390392.757773704
	I0120 16:26:32.787612 2180359 fix.go:229] Guest: 2025-01-20 16:26:32.757773704 +0000 UTC Remote: 2025-01-20 16:26:32.667651863 +0000 UTC m=+82.452193144 (delta=90.121841ms)
	I0120 16:26:32.787634 2180359 fix.go:200] guest clock delta is within tolerance: 90.121841ms
	I0120 16:26:32.787639 2180359 start.go:83] releasing machines lock for "no-preload-552545", held for 25.015741626s
	I0120 16:26:32.787670 2180359 main.go:141] libmachine: (no-preload-552545) Calling .DriverName
	I0120 16:26:32.787984 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetIP
	I0120 16:26:32.791258 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.791742 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.791770 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.791924 2180359 main.go:141] libmachine: (no-preload-552545) Calling .DriverName
	I0120 16:26:32.792506 2180359 main.go:141] libmachine: (no-preload-552545) Calling .DriverName
	I0120 16:26:32.792685 2180359 main.go:141] libmachine: (no-preload-552545) Calling .DriverName
	I0120 16:26:32.792777 2180359 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:26:32.792839 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:32.792897 2180359 ssh_runner.go:195] Run: cat /version.json
	I0120 16:26:32.792933 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHHostname
	I0120 16:26:32.796072 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.796105 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.796476 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.796513 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:32.796537 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.796553 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:32.796697 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:32.796808 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHPort
	I0120 16:26:32.796899 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:32.796968 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHKeyPath
	I0120 16:26:32.797034 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:32.797098 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetSSHUsername
	I0120 16:26:32.797190 2180359 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/id_rsa Username:docker}
	I0120 16:26:32.797348 2180359 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/no-preload-552545/id_rsa Username:docker}
	I0120 16:26:32.880017 2180359 ssh_runner.go:195] Run: systemctl --version
	I0120 16:26:32.908268 2180359 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:26:33.073624 2180359 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:26:33.080924 2180359 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:26:33.081036 2180359 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:26:33.098655 2180359 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:26:33.098691 2180359 start.go:495] detecting cgroup driver to use...
	I0120 16:26:33.098783 2180359 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:26:33.117525 2180359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:26:33.132773 2180359 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:26:33.132861 2180359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:26:33.149974 2180359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:26:33.167800 2180359 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:26:33.282810 2180359 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:26:33.435894 2180359 docker.go:233] disabling docker service ...
	I0120 16:26:33.435977 2180359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:26:33.452545 2180359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:26:33.467836 2180359 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:26:33.595554 2180359 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:26:33.719507 2180359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:26:33.735022 2180359 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:26:33.755395 2180359 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:26:33.755479 2180359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:33.767370 2180359 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:26:33.767463 2180359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:33.780422 2180359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:33.799909 2180359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:33.811821 2180359 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:26:33.823694 2180359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:33.835764 2180359 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:33.857063 2180359 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:33.868848 2180359 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:26:33.879164 2180359 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:26:33.879248 2180359 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:26:33.893165 2180359 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:26:33.904358 2180359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:26:34.034407 2180359 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:26:34.133909 2180359 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:26:34.133995 2180359 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:26:34.138814 2180359 start.go:563] Will wait 60s for crictl version
	I0120 16:26:34.138890 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.142834 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:26:34.187000 2180359 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:26:34.187123 2180359 ssh_runner.go:195] Run: crio --version
	I0120 16:26:34.217721 2180359 ssh_runner.go:195] Run: crio --version
	I0120 16:26:34.252332 2180359 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:26:34.253821 2180359 main.go:141] libmachine: (no-preload-552545) Calling .GetIP
	I0120 16:26:34.256880 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:34.257274 2180359 main.go:141] libmachine: (no-preload-552545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1b:04", ip: ""} in network mk-no-preload-552545: {Iface:virbr1 ExpiryTime:2025-01-20 17:26:24 +0000 UTC Type:0 Mac:52:54:00:02:1b:04 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:no-preload-552545 Clientid:01:52:54:00:02:1b:04}
	I0120 16:26:34.257309 2180359 main.go:141] libmachine: (no-preload-552545) DBG | domain no-preload-552545 has defined IP address 192.168.39.131 and MAC address 52:54:00:02:1b:04 in network mk-no-preload-552545
	I0120 16:26:34.257505 2180359 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:26:34.261953 2180359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:26:34.275239 2180359 kubeadm.go:883] updating cluster {Name:no-preload-552545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-552545 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:26:34.275368 2180359 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:26:34.275409 2180359 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:26:34.311540 2180359 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:26:34.311569 2180359 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 16:26:34.311645 2180359 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:34.311677 2180359 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 16:26:34.311699 2180359 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0120 16:26:34.311710 2180359 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 16:26:34.311760 2180359 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0120 16:26:34.311769 2180359 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 16:26:34.311783 2180359 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 16:26:34.311805 2180359 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 16:26:34.313048 2180359 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 16:26:34.313051 2180359 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 16:26:34.313080 2180359 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 16:26:34.313050 2180359 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 16:26:34.313110 2180359 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0120 16:26:34.313050 2180359 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 16:26:34.313138 2180359 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:34.313148 2180359 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0120 16:26:34.463836 2180359 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I0120 16:26:34.479348 2180359 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I0120 16:26:34.483037 2180359 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0120 16:26:34.490671 2180359 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 16:26:34.516443 2180359 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0120 16:26:34.519639 2180359 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I0120 16:26:34.530586 2180359 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I0120 16:26:34.530658 2180359 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 16:26:34.530717 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.553684 2180359 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0120 16:26:34.599786 2180359 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I0120 16:26:34.599841 2180359 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0120 16:26:34.599892 2180359 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 16:26:34.599943 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.599847 2180359 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 16:26:34.600006 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.632244 2180359 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I0120 16:26:34.632315 2180359 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 16:26:34.632377 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.654401 2180359 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I0120 16:26:34.654431 2180359 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0120 16:26:34.654459 2180359 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 16:26:34.654466 2180359 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0120 16:26:34.654491 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 16:26:34.654502 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.654514 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.679123 2180359 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0120 16:26:34.679176 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 16:26:34.679217 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 16:26:34.679247 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 16:26:34.679182 2180359 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0120 16:26:34.679347 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.716992 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0120 16:26:34.717008 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 16:26:34.717061 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 16:26:34.748623 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 16:26:34.759968 2180359 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:34.820221 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 16:26:34.842181 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 16:26:34.842181 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 16:26:34.895181 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 16:26:34.899839 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0120 16:26:34.899861 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 16:26:34.899962 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 16:26:34.937438 2180359 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0120 16:26:34.937492 2180359 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:34.937549 2180359 ssh_runner.go:195] Run: which crictl
	I0120 16:26:34.967844 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 16:26:34.999429 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 16:26:35.006169 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 16:26:35.069932 2180359 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I0120 16:26:35.070084 2180359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 16:26:35.076633 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 16:26:35.076666 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0120 16:26:35.076668 2180359 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I0120 16:26:35.076718 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:35.076783 2180359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 16:26:35.157908 2180359 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0120 16:26:35.158027 2180359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0120 16:26:35.177658 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 16:26:35.177718 2180359 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.32.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.32.0': No such file or directory
	I0120 16:26:35.177758 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 --> /var/lib/minikube/images/kube-apiserver_v1.32.0 (28680192 bytes)
	I0120 16:26:35.177777 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:35.177789 2180359 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.32.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.32.0': No such file or directory
	I0120 16:26:35.177830 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 --> /var/lib/minikube/images/kube-scheduler_v1.32.0 (20666368 bytes)
	I0120 16:26:35.177658 2180359 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I0120 16:26:35.177968 2180359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 16:26:32.812707 2180725 machine.go:93] provisionDockerMachine start ...
	I0120 16:26:32.812736 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:26:32.812985 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:32.815864 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:32.816395 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:32.816436 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:32.816612 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:32.816789 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:32.816960 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:32.817155 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:32.817291 2180725 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:32.817514 2180725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:26:32.817530 2180725 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 16:26:32.921432 2180725 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-207056
	
	I0120 16:26:32.921469 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetMachineName
	I0120 16:26:32.921728 2180725 buildroot.go:166] provisioning hostname "kubernetes-upgrade-207056"
	I0120 16:26:32.921758 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetMachineName
	I0120 16:26:32.921934 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:32.924935 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:32.925379 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:32.925430 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:32.925617 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:32.925844 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:32.926010 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:32.926148 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:32.926357 2180725 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:32.926645 2180725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:26:32.926667 2180725 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-207056 && echo "kubernetes-upgrade-207056" | sudo tee /etc/hostname
	I0120 16:26:33.047644 2180725 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-207056
	
	I0120 16:26:33.047688 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:33.050660 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.051117 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:33.051155 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.051268 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:33.051473 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:33.051616 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:33.051788 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:33.052010 2180725 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:33.052323 2180725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:26:33.052358 2180725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-207056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-207056/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-207056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:26:33.156435 2180725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:26:33.156488 2180725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:26:33.156524 2180725 buildroot.go:174] setting up certificates
	I0120 16:26:33.156542 2180725 provision.go:84] configureAuth start
	I0120 16:26:33.156558 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetMachineName
	I0120 16:26:33.156835 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetIP
	I0120 16:26:33.160083 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.160471 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:33.160504 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.160655 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:33.163241 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.163602 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:33.163644 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.163881 2180725 provision.go:143] copyHostCerts
	I0120 16:26:33.163956 2180725 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:26:33.163984 2180725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:26:33.164078 2180725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:26:33.164199 2180725 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:26:33.164209 2180725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:26:33.164234 2180725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:26:33.164308 2180725 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:26:33.164317 2180725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:26:33.164344 2180725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:26:33.164420 2180725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-207056 san=[127.0.0.1 192.168.72.209 kubernetes-upgrade-207056 localhost minikube]
	I0120 16:26:33.442823 2180725 provision.go:177] copyRemoteCerts
	I0120 16:26:33.442888 2180725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:26:33.442918 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:33.445917 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.446315 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:33.446366 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.446532 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:33.446800 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:33.446985 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:33.447115 2180725 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa Username:docker}
	I0120 16:26:33.535392 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:26:33.563574 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0120 16:26:33.592730 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 16:26:33.625014 2180725 provision.go:87] duration metric: took 468.451335ms to configureAuth
	I0120 16:26:33.625072 2180725 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:26:33.625851 2180725 config.go:182] Loaded profile config "kubernetes-upgrade-207056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:26:33.626051 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:33.629988 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.630468 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:33.630504 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:33.630697 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:33.630922 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:33.631132 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:33.631277 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:33.631498 2180725 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:33.631729 2180725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:26:33.631756 2180725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:26:39.891931 2181167 start.go:364] duration metric: took 36.002733042s to acquireMachinesLock for "embed-certs-429406"
	I0120 16:26:39.892007 2181167 start.go:93] Provisioning new machine with config: &{Name:embed-certs-429406 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-42940
6 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:26:39.892160 2181167 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:26:35.268037 2180359 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I0120 16:26:35.268117 2180359 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0120 16:26:35.268183 2180359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 16:26:35.268203 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0120 16:26:35.268048 2180359 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0120 16:26:35.268413 2180359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0120 16:26:35.284040 2180359 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0120 16:26:35.284160 2180359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0120 16:26:35.299652 2180359 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:35.299671 2180359 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.32.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.32.0': No such file or directory
	I0120 16:26:35.299702 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 --> /var/lib/minikube/images/kube-controller-manager_v1.32.0 (26265088 bytes)
	I0120 16:26:35.476788 2180359 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0120 16:26:35.476839 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0120 16:26:35.476876 2180359 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.16-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.16-0': No such file or directory
	I0120 16:26:35.476899 2180359 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.32.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.32.0': No such file or directory
	I0120 16:26:35.476917 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 --> /var/lib/minikube/images/kube-proxy_v1.32.0 (30908928 bytes)
	I0120 16:26:35.476900 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 --> /var/lib/minikube/images/etcd_3.5.16-0 (57690112 bytes)
	I0120 16:26:35.477941 2180359 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0120 16:26:35.478077 2180359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0120 16:26:35.568098 2180359 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0120 16:26:35.568169 2180359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0120 16:26:35.589462 2180359 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0120 16:26:35.589576 2180359 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0120 16:26:36.344709 2180359 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0120 16:26:36.344787 2180359 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0120 16:26:36.344855 2180359 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0120 16:26:37.298705 2180359 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0120 16:26:37.298773 2180359 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 16:26:37.298837 2180359 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 16:26:39.382941 2180359 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (2.084069307s)
	I0120 16:26:39.382979 2180359 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I0120 16:26:39.383009 2180359 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0120 16:26:39.383083 2180359 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0120 16:26:39.652077 2180725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:26:39.652112 2180725 machine.go:96] duration metric: took 6.839384084s to provisionDockerMachine
	I0120 16:26:39.652127 2180725 start.go:293] postStartSetup for "kubernetes-upgrade-207056" (driver="kvm2")
	I0120 16:26:39.652140 2180725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:26:39.652164 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:26:39.652619 2180725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:26:39.652663 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:39.655529 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.655891 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:39.655924 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.656131 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:39.656368 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:39.656527 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:39.656683 2180725 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa Username:docker}
	I0120 16:26:39.737217 2180725 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:26:39.741933 2180725 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:26:39.741961 2180725 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:26:39.742047 2180725 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:26:39.742168 2180725 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:26:39.742293 2180725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:26:39.752163 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:26:39.776570 2180725 start.go:296] duration metric: took 124.42448ms for postStartSetup
	I0120 16:26:39.776625 2180725 fix.go:56] duration metric: took 6.988780203s for fixHost
	I0120 16:26:39.776651 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:39.779790 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.780139 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:39.780169 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.780463 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:39.780703 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:39.780884 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:39.781042 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:39.781195 2180725 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:39.781361 2180725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.209 22 <nil> <nil>}
	I0120 16:26:39.781372 2180725 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:26:39.891760 2180725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737390399.882747835
	
	I0120 16:26:39.891793 2180725 fix.go:216] guest clock: 1737390399.882747835
	I0120 16:26:39.891804 2180725 fix.go:229] Guest: 2025-01-20 16:26:39.882747835 +0000 UTC Remote: 2025-01-20 16:26:39.776630973 +0000 UTC m=+59.284428177 (delta=106.116862ms)
	I0120 16:26:39.891832 2180725 fix.go:200] guest clock delta is within tolerance: 106.116862ms
	I0120 16:26:39.891839 2180725 start.go:83] releasing machines lock for "kubernetes-upgrade-207056", held for 7.104051703s
	I0120 16:26:39.891870 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:26:39.892192 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetIP
	I0120 16:26:39.895474 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.895845 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:39.895881 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.896000 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:26:39.896609 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:26:39.896834 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .DriverName
	I0120 16:26:39.896937 2180725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:26:39.897013 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:39.897130 2180725 ssh_runner.go:195] Run: cat /version.json
	I0120 16:26:39.897164 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHHostname
	I0120 16:26:39.900012 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.900285 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.900426 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:39.900453 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.900604 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:39.900703 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:39.900726 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:39.900809 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:39.900985 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHPort
	I0120 16:26:39.900998 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:39.901173 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHKeyPath
	I0120 16:26:39.901195 2180725 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa Username:docker}
	I0120 16:26:39.901301 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetSSHUsername
	I0120 16:26:39.901442 2180725 sshutil.go:53] new ssh client: &{IP:192.168.72.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/kubernetes-upgrade-207056/id_rsa Username:docker}
	I0120 16:26:39.984797 2180725 ssh_runner.go:195] Run: systemctl --version
	I0120 16:26:40.015335 2180725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:26:40.192684 2180725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:26:40.201410 2180725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:26:40.201488 2180725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:26:40.216692 2180725 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 16:26:40.216729 2180725 start.go:495] detecting cgroup driver to use...
	I0120 16:26:40.216804 2180725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:26:40.244201 2180725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:26:40.260947 2180725 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:26:40.261020 2180725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:26:40.278364 2180725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:26:40.295276 2180725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:26:40.470274 2180725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:26:39.893960 2181167 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 16:26:39.894148 2181167 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:26:39.894201 2181167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:26:39.912110 2181167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0120 16:26:39.912588 2181167 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:26:39.913219 2181167 main.go:141] libmachine: Using API Version  1
	I0120 16:26:39.913246 2181167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:26:39.913634 2181167 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:26:39.913829 2181167 main.go:141] libmachine: (embed-certs-429406) Calling .GetMachineName
	I0120 16:26:39.913974 2181167 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:26:39.914116 2181167 start.go:159] libmachine.API.Create for "embed-certs-429406" (driver="kvm2")
	I0120 16:26:39.914171 2181167 client.go:168] LocalClient.Create starting
	I0120 16:26:39.914209 2181167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:26:39.914247 2181167 main.go:141] libmachine: Decoding PEM data...
	I0120 16:26:39.914267 2181167 main.go:141] libmachine: Parsing certificate...
	I0120 16:26:39.914340 2181167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:26:39.914369 2181167 main.go:141] libmachine: Decoding PEM data...
	I0120 16:26:39.914395 2181167 main.go:141] libmachine: Parsing certificate...
	I0120 16:26:39.914420 2181167 main.go:141] libmachine: Running pre-create checks...
	I0120 16:26:39.914434 2181167 main.go:141] libmachine: (embed-certs-429406) Calling .PreCreateCheck
	I0120 16:26:39.914866 2181167 main.go:141] libmachine: (embed-certs-429406) Calling .GetConfigRaw
	I0120 16:26:39.915311 2181167 main.go:141] libmachine: Creating machine...
	I0120 16:26:39.915326 2181167 main.go:141] libmachine: (embed-certs-429406) Calling .Create
	I0120 16:26:39.915451 2181167 main.go:141] libmachine: (embed-certs-429406) creating KVM machine...
	I0120 16:26:39.915470 2181167 main.go:141] libmachine: (embed-certs-429406) creating network...
	I0120 16:26:39.916875 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | found existing default KVM network
	I0120 16:26:39.918689 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:39.918489 2181530 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:14:91} reservation:<nil>}
	I0120 16:26:39.919928 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:39.919828 2181530 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:0e:01} reservation:<nil>}
	I0120 16:26:39.921385 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:39.921284 2181530 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003192c0}
	I0120 16:26:39.921422 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | created network xml: 
	I0120 16:26:39.921433 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | <network>
	I0120 16:26:39.921441 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |   <name>mk-embed-certs-429406</name>
	I0120 16:26:39.921450 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |   <dns enable='no'/>
	I0120 16:26:39.921469 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |   
	I0120 16:26:39.921480 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0120 16:26:39.921494 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |     <dhcp>
	I0120 16:26:39.921508 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0120 16:26:39.921515 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |     </dhcp>
	I0120 16:26:39.921523 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |   </ip>
	I0120 16:26:39.921533 2181167 main.go:141] libmachine: (embed-certs-429406) DBG |   
	I0120 16:26:39.921542 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | </network>
	I0120 16:26:39.921563 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | 
	I0120 16:26:39.926624 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | trying to create private KVM network mk-embed-certs-429406 192.168.61.0/24...
	I0120 16:26:40.007196 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | private KVM network mk-embed-certs-429406 192.168.61.0/24 created
	I0120 16:26:40.007226 2181167 main.go:141] libmachine: (embed-certs-429406) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406 ...
	I0120 16:26:40.007239 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:40.007155 2181530 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:26:40.007347 2181167 main.go:141] libmachine: (embed-certs-429406) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:26:40.007388 2181167 main.go:141] libmachine: (embed-certs-429406) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:26:40.276950 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:40.276811 2181530 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa...
	I0120 16:26:40.355638 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:40.355480 2181530 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/embed-certs-429406.rawdisk...
	I0120 16:26:40.355672 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | Writing magic tar header
	I0120 16:26:40.355690 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | Writing SSH key tar header
	I0120 16:26:40.355703 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:40.355626 2181530 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406 ...
	I0120 16:26:40.355724 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406
	I0120 16:26:40.355832 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:26:40.355859 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:26:40.355872 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:26:40.355881 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:26:40.355893 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | checking permissions on dir: /home/jenkins
	I0120 16:26:40.355901 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | checking permissions on dir: /home
	I0120 16:26:40.355911 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | skipping /home - not owner
	I0120 16:26:40.355928 2181167 main.go:141] libmachine: (embed-certs-429406) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406 (perms=drwx------)
	I0120 16:26:40.355944 2181167 main.go:141] libmachine: (embed-certs-429406) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:26:40.355956 2181167 main.go:141] libmachine: (embed-certs-429406) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:26:40.355965 2181167 main.go:141] libmachine: (embed-certs-429406) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:26:40.355977 2181167 main.go:141] libmachine: (embed-certs-429406) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:26:40.355985 2181167 main.go:141] libmachine: (embed-certs-429406) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:26:40.355994 2181167 main.go:141] libmachine: (embed-certs-429406) creating domain...
	I0120 16:26:40.357320 2181167 main.go:141] libmachine: (embed-certs-429406) define libvirt domain using xml: 
	I0120 16:26:40.357347 2181167 main.go:141] libmachine: (embed-certs-429406) <domain type='kvm'>
	I0120 16:26:40.357358 2181167 main.go:141] libmachine: (embed-certs-429406)   <name>embed-certs-429406</name>
	I0120 16:26:40.357365 2181167 main.go:141] libmachine: (embed-certs-429406)   <memory unit='MiB'>2200</memory>
	I0120 16:26:40.357374 2181167 main.go:141] libmachine: (embed-certs-429406)   <vcpu>2</vcpu>
	I0120 16:26:40.357396 2181167 main.go:141] libmachine: (embed-certs-429406)   <features>
	I0120 16:26:40.357410 2181167 main.go:141] libmachine: (embed-certs-429406)     <acpi/>
	I0120 16:26:40.357420 2181167 main.go:141] libmachine: (embed-certs-429406)     <apic/>
	I0120 16:26:40.357431 2181167 main.go:141] libmachine: (embed-certs-429406)     <pae/>
	I0120 16:26:40.357441 2181167 main.go:141] libmachine: (embed-certs-429406)     
	I0120 16:26:40.357452 2181167 main.go:141] libmachine: (embed-certs-429406)   </features>
	I0120 16:26:40.357463 2181167 main.go:141] libmachine: (embed-certs-429406)   <cpu mode='host-passthrough'>
	I0120 16:26:40.357474 2181167 main.go:141] libmachine: (embed-certs-429406)   
	I0120 16:26:40.357480 2181167 main.go:141] libmachine: (embed-certs-429406)   </cpu>
	I0120 16:26:40.357491 2181167 main.go:141] libmachine: (embed-certs-429406)   <os>
	I0120 16:26:40.357503 2181167 main.go:141] libmachine: (embed-certs-429406)     <type>hvm</type>
	I0120 16:26:40.357515 2181167 main.go:141] libmachine: (embed-certs-429406)     <boot dev='cdrom'/>
	I0120 16:26:40.357525 2181167 main.go:141] libmachine: (embed-certs-429406)     <boot dev='hd'/>
	I0120 16:26:40.357538 2181167 main.go:141] libmachine: (embed-certs-429406)     <bootmenu enable='no'/>
	I0120 16:26:40.357548 2181167 main.go:141] libmachine: (embed-certs-429406)   </os>
	I0120 16:26:40.357560 2181167 main.go:141] libmachine: (embed-certs-429406)   <devices>
	I0120 16:26:40.357573 2181167 main.go:141] libmachine: (embed-certs-429406)     <disk type='file' device='cdrom'>
	I0120 16:26:40.357594 2181167 main.go:141] libmachine: (embed-certs-429406)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/boot2docker.iso'/>
	I0120 16:26:40.357606 2181167 main.go:141] libmachine: (embed-certs-429406)       <target dev='hdc' bus='scsi'/>
	I0120 16:26:40.357615 2181167 main.go:141] libmachine: (embed-certs-429406)       <readonly/>
	I0120 16:26:40.357624 2181167 main.go:141] libmachine: (embed-certs-429406)     </disk>
	I0120 16:26:40.357634 2181167 main.go:141] libmachine: (embed-certs-429406)     <disk type='file' device='disk'>
	I0120 16:26:40.357647 2181167 main.go:141] libmachine: (embed-certs-429406)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:26:40.357664 2181167 main.go:141] libmachine: (embed-certs-429406)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/embed-certs-429406.rawdisk'/>
	I0120 16:26:40.357676 2181167 main.go:141] libmachine: (embed-certs-429406)       <target dev='hda' bus='virtio'/>
	I0120 16:26:40.357688 2181167 main.go:141] libmachine: (embed-certs-429406)     </disk>
	I0120 16:26:40.357699 2181167 main.go:141] libmachine: (embed-certs-429406)     <interface type='network'>
	I0120 16:26:40.357711 2181167 main.go:141] libmachine: (embed-certs-429406)       <source network='mk-embed-certs-429406'/>
	I0120 16:26:40.357723 2181167 main.go:141] libmachine: (embed-certs-429406)       <model type='virtio'/>
	I0120 16:26:40.357736 2181167 main.go:141] libmachine: (embed-certs-429406)     </interface>
	I0120 16:26:40.357746 2181167 main.go:141] libmachine: (embed-certs-429406)     <interface type='network'>
	I0120 16:26:40.357756 2181167 main.go:141] libmachine: (embed-certs-429406)       <source network='default'/>
	I0120 16:26:40.357767 2181167 main.go:141] libmachine: (embed-certs-429406)       <model type='virtio'/>
	I0120 16:26:40.357782 2181167 main.go:141] libmachine: (embed-certs-429406)     </interface>
	I0120 16:26:40.357794 2181167 main.go:141] libmachine: (embed-certs-429406)     <serial type='pty'>
	I0120 16:26:40.357807 2181167 main.go:141] libmachine: (embed-certs-429406)       <target port='0'/>
	I0120 16:26:40.357816 2181167 main.go:141] libmachine: (embed-certs-429406)     </serial>
	I0120 16:26:40.357825 2181167 main.go:141] libmachine: (embed-certs-429406)     <console type='pty'>
	I0120 16:26:40.357837 2181167 main.go:141] libmachine: (embed-certs-429406)       <target type='serial' port='0'/>
	I0120 16:26:40.357849 2181167 main.go:141] libmachine: (embed-certs-429406)     </console>
	I0120 16:26:40.357856 2181167 main.go:141] libmachine: (embed-certs-429406)     <rng model='virtio'>
	I0120 16:26:40.357869 2181167 main.go:141] libmachine: (embed-certs-429406)       <backend model='random'>/dev/random</backend>
	I0120 16:26:40.357879 2181167 main.go:141] libmachine: (embed-certs-429406)     </rng>
	I0120 16:26:40.357887 2181167 main.go:141] libmachine: (embed-certs-429406)     
	I0120 16:26:40.357897 2181167 main.go:141] libmachine: (embed-certs-429406)     
	I0120 16:26:40.357905 2181167 main.go:141] libmachine: (embed-certs-429406)   </devices>
	I0120 16:26:40.357915 2181167 main.go:141] libmachine: (embed-certs-429406) </domain>
	I0120 16:26:40.357929 2181167 main.go:141] libmachine: (embed-certs-429406) 
	I0120 16:26:40.366740 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:e5:90:c1 in network default
	I0120 16:26:40.367490 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:40.367522 2181167 main.go:141] libmachine: (embed-certs-429406) starting domain...
	I0120 16:26:40.367530 2181167 main.go:141] libmachine: (embed-certs-429406) ensuring networks are active...
	I0120 16:26:40.368415 2181167 main.go:141] libmachine: (embed-certs-429406) Ensuring network default is active
	I0120 16:26:40.368729 2181167 main.go:141] libmachine: (embed-certs-429406) Ensuring network mk-embed-certs-429406 is active
	I0120 16:26:40.369363 2181167 main.go:141] libmachine: (embed-certs-429406) getting domain XML...
	I0120 16:26:40.370111 2181167 main.go:141] libmachine: (embed-certs-429406) creating domain...
	I0120 16:26:41.695933 2181167 main.go:141] libmachine: (embed-certs-429406) waiting for IP...
	I0120 16:26:41.696730 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:41.697116 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:41.697184 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:41.697114 2181530 retry.go:31] will retry after 278.564448ms: waiting for domain to come up
	I0120 16:26:41.978747 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:41.979386 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:41.979417 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:41.979343 2181530 retry.go:31] will retry after 292.895656ms: waiting for domain to come up
	I0120 16:26:42.273733 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:42.274246 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:42.274279 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:42.274195 2181530 retry.go:31] will retry after 320.214763ms: waiting for domain to come up
	I0120 16:26:42.595714 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:42.596269 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:42.596302 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:42.596198 2181530 retry.go:31] will retry after 520.24746ms: waiting for domain to come up
	I0120 16:26:43.118203 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:43.118761 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:43.118794 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:43.118708 2181530 retry.go:31] will retry after 665.291547ms: waiting for domain to come up
	I0120 16:26:43.785570 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:43.786155 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:43.786200 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:43.786121 2181530 retry.go:31] will retry after 776.793828ms: waiting for domain to come up
	I0120 16:26:42.871366 2180359 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (3.488226824s)
	I0120 16:26:42.871413 2180359 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0120 16:26:42.871449 2180359 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 16:26:42.871512 2180359 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 16:26:44.829391 2180359 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.95784381s)
	I0120 16:26:44.829434 2180359 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I0120 16:26:44.829477 2180359 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 16:26:44.829527 2180359 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 16:26:40.642306 2180725 docker.go:233] disabling docker service ...
	I0120 16:26:40.642389 2180725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:26:40.663669 2180725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:26:40.681080 2180725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:26:40.838259 2180725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:26:41.004212 2180725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:26:41.024561 2180725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:26:41.050805 2180725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:26:41.050880 2180725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:41.063162 2180725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:26:41.063251 2180725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:41.079056 2180725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:41.095173 2180725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:41.109633 2180725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:26:41.121701 2180725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:41.133598 2180725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:41.145865 2180725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:41.157581 2180725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:26:41.167514 2180725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:26:41.180427 2180725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:26:41.341992 2180725 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:26:46.373626 2180725 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.03158141s)
	I0120 16:26:46.373747 2180725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:26:46.373842 2180725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:26:46.379861 2180725 start.go:563] Will wait 60s for crictl version
	I0120 16:26:46.379954 2180725 ssh_runner.go:195] Run: which crictl
	I0120 16:26:46.385424 2180725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:26:46.434454 2180725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:26:46.434555 2180725 ssh_runner.go:195] Run: crio --version
	I0120 16:26:46.479467 2180725 ssh_runner.go:195] Run: crio --version
	I0120 16:26:46.669890 2180725 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:26:44.564115 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:44.564611 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:44.564642 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:44.564582 2181530 retry.go:31] will retry after 1.157278197s: waiting for domain to come up
	I0120 16:26:45.723526 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:45.723982 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:45.724015 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:45.723944 2181530 retry.go:31] will retry after 1.427846506s: waiting for domain to come up
	I0120 16:26:47.153286 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:26:47.153777 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:26:47.153808 2181167 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:26:47.153737 2181530 retry.go:31] will retry after 1.773606572s: waiting for domain to come up
	I0120 16:26:46.895849 2180359 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.066288192s)
	I0120 16:26:46.895887 2180359 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I0120 16:26:46.895921 2180359 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 16:26:46.895975 2180359 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 16:26:49.380578 2180359 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (2.484566669s)
	I0120 16:26:49.380627 2180359 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I0120 16:26:49.380658 2180359 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0120 16:26:49.380747 2180359 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0120 16:26:46.764990 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) Calling .GetIP
	I0120 16:26:46.768456 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:46.768917 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:e5:bd", ip: ""} in network mk-kubernetes-upgrade-207056: {Iface:virbr3 ExpiryTime:2025-01-20 17:25:10 +0000 UTC Type:0 Mac:52:54:00:e5:e5:bd Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:kubernetes-upgrade-207056 Clientid:01:52:54:00:e5:e5:bd}
	I0120 16:26:46.768954 2180725 main.go:141] libmachine: (kubernetes-upgrade-207056) DBG | domain kubernetes-upgrade-207056 has defined IP address 192.168.72.209 and MAC address 52:54:00:e5:e5:bd in network mk-kubernetes-upgrade-207056
	I0120 16:26:46.769349 2180725 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 16:26:46.774797 2180725 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-207056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-207056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:26:46.774924 2180725 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:26:46.774995 2180725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:26:46.829770 2180725 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:26:46.829820 2180725 crio.go:433] Images already preloaded, skipping extraction
	I0120 16:26:46.829920 2180725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:26:46.869378 2180725 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:26:46.869412 2180725 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:26:46.869423 2180725 kubeadm.go:934] updating node { 192.168.72.209 8443 v1.32.0 crio true true} ...
	I0120 16:26:46.869561 2180725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-207056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-207056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:26:46.869641 2180725 ssh_runner.go:195] Run: crio config
	I0120 16:26:46.925229 2180725 cni.go:84] Creating CNI manager for ""
	I0120 16:26:46.925254 2180725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:26:46.925266 2180725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:26:46.925292 2180725 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.209 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-207056 NodeName:kubernetes-upgrade-207056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:26:46.925450 2180725 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-207056"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.209"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.209"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:26:46.925530 2180725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:26:46.937720 2180725 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:26:46.937812 2180725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:26:46.949469 2180725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0120 16:26:46.969502 2180725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:26:46.991283 2180725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0120 16:26:47.015399 2180725 ssh_runner.go:195] Run: grep 192.168.72.209	control-plane.minikube.internal$ /etc/hosts
	I0120 16:26:47.019965 2180725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:26:47.173474 2180725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:26:47.194256 2180725 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056 for IP: 192.168.72.209
	I0120 16:26:47.194290 2180725 certs.go:194] generating shared ca certs ...
	I0120 16:26:47.194311 2180725 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:47.194511 2180725 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:26:47.194570 2180725 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:26:47.194584 2180725 certs.go:256] generating profile certs ...
	I0120 16:26:47.194731 2180725 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/client.key
	I0120 16:26:47.194807 2180725 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.key.b35a4ac7
	I0120 16:26:47.194864 2180725 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.key
	I0120 16:26:47.195050 2180725 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:26:47.195114 2180725 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:26:47.195129 2180725 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:26:47.195166 2180725 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:26:47.195201 2180725 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:26:47.195236 2180725 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:26:47.195296 2180725 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:26:47.195979 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:26:47.224272 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:26:47.253755 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:26:47.282407 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:26:47.310417 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0120 16:26:47.340098 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:26:47.372359 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:26:47.400494 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kubernetes-upgrade-207056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:26:47.428768 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:26:47.457995 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:26:47.486973 2180725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:26:47.514877 2180725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:26:47.536486 2180725 ssh_runner.go:195] Run: openssl version
	I0120 16:26:47.545721 2180725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:26:47.557417 2180725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:47.564151 2180725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:47.564236 2180725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:47.572978 2180725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:26:47.588141 2180725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:26:47.600338 2180725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:26:47.606726 2180725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:26:47.606804 2180725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:26:47.613648 2180725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:26:47.623996 2180725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:26:47.639175 2180725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:26:47.644231 2180725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:26:47.644319 2180725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:26:47.652474 2180725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:26:47.663196 2180725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:26:47.668955 2180725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 16:26:47.676950 2180725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 16:26:47.682993 2180725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 16:26:47.689053 2180725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 16:26:47.695113 2180725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 16:26:47.702813 2180725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 16:26:47.710740 2180725 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-207056 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-207056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:26:47.710829 2180725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:26:47.710883 2180725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:26:47.769246 2180725 cri.go:89] found id: "77a5f4de8f209519f679d6bb9b4718e1b4b4465e7ac255ef399fcf21981bd7cd"
	I0120 16:26:47.769271 2180725 cri.go:89] found id: "e29610a29bc389e512b51f48f0421228da7274b18f1c071945c780ab1ee06302"
	I0120 16:26:47.769275 2180725 cri.go:89] found id: "13b315eff5cd8376fd92efaf604cac21dc5f60fb40c1610e14a30a906a8a9838"
	I0120 16:26:47.769290 2180725 cri.go:89] found id: "b956c7fb6e487138ef74f67d8e48359bd3bdd1d9c1619e866c79756150e15c2d"
	I0120 16:26:47.769292 2180725 cri.go:89] found id: "17c5507d11ec4277210d5744bc165fc420711c13174490ab7a72864bf48222b1"
	I0120 16:26:47.769296 2180725 cri.go:89] found id: "97f53f126ea92c9307025518983914250f920e5e83159ac5cc38db2022d1f519"
	I0120 16:26:47.769300 2180725 cri.go:89] found id: "cae5a72f497d29f0601886d2aa6bb441f77eb3a735640ba4a591f148dd051566"
	I0120 16:26:47.769304 2180725 cri.go:89] found id: "2f58b44c5d00cabd68408d16e1195fe51c0cd2e13ee87b0b32ab05184c1f2835"
	I0120 16:26:47.769309 2180725 cri.go:89] found id: ""
	I0120 16:26:47.769369 2180725 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-207056 -n kubernetes-upgrade-207056
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-207056 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-207056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-207056
--- FAIL: TestKubernetesUpgrade (445.87s)
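The truncated log above ends while the harness is probing control-plane certificates and enumerating kube-system containers on the kubernetes-upgrade-207056 node. For reference only (not part of the test output), a minimal shell sketch that reproduces those probes by hand; it assumes a comparable profile is still running and reachable through the same minikube binary, since the cleanup step above deletes this one:

	# Re-run the certificate freshness probes from the log: exit status 0 means
	# the certificate will not expire within the next 86400s (24h).
	for crt in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
	           etcd/server etcd/peer etcd/healthcheck-client; do
	  out/minikube-linux-amd64 -p kubernetes-upgrade-207056 ssh \
	    "openssl x509 -noout -in /var/lib/minikube/certs/${crt}.crt -checkend 86400" \
	    && echo "${crt}: valid for at least 24h" || echo "${crt}: expiring or unreadable"
	done
	# List kube-system container IDs the same way the harness does above.
	out/minikube-linux-amd64 -p kubernetes-upgrade-207056 ssh \
	  "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"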

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.05s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-162976 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-162976 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.480087849s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-162976] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-162976" primary control-plane node in "pause-162976" cluster
	* Updating the running kvm2 "pause-162976" VM ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-162976" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:23:59.451069 2177106 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:23:59.451395 2177106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:23:59.451409 2177106 out.go:358] Setting ErrFile to fd 2...
	I0120 16:23:59.451417 2177106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:23:59.451662 2177106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:23:59.452330 2177106 out.go:352] Setting JSON to false
	I0120 16:23:59.453688 2177106 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29185,"bootTime":1737361054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:23:59.453825 2177106 start.go:139] virtualization: kvm guest
	I0120 16:23:59.456075 2177106 out.go:177] * [pause-162976] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:23:59.457353 2177106 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:23:59.457423 2177106 notify.go:220] Checking for updates...
	I0120 16:23:59.459913 2177106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:23:59.461361 2177106 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:23:59.462627 2177106 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:23:59.463957 2177106 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:23:59.465323 2177106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:23:59.467183 2177106 config.go:182] Loaded profile config "pause-162976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:23:59.467839 2177106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:23:59.467911 2177106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:23:59.488513 2177106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0120 16:23:59.489050 2177106 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:23:59.489781 2177106 main.go:141] libmachine: Using API Version  1
	I0120 16:23:59.489863 2177106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:23:59.490510 2177106 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:23:59.490883 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:23:59.491257 2177106 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:23:59.491763 2177106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:23:59.491820 2177106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:23:59.508633 2177106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35257
	I0120 16:23:59.509245 2177106 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:23:59.510050 2177106 main.go:141] libmachine: Using API Version  1
	I0120 16:23:59.510145 2177106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:23:59.510572 2177106 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:23:59.510862 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:23:59.549193 2177106 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 16:23:59.551035 2177106 start.go:297] selected driver: kvm2
	I0120 16:23:59.551065 2177106 start.go:901] validating driver "kvm2" against &{Name:pause-162976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-poli
cy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:23:59.551298 2177106 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:23:59.551811 2177106 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:23:59.551927 2177106 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:23:59.571700 2177106 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:23:59.572866 2177106 cni.go:84] Creating CNI manager for ""
	I0120 16:23:59.572957 2177106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:23:59.573048 2177106 start.go:340] cluster config:
	{Name:pause-162976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:
false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:23:59.573249 2177106 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:23:59.574810 2177106 out.go:177] * Starting "pause-162976" primary control-plane node in "pause-162976" cluster
	I0120 16:23:59.576081 2177106 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:23:59.576131 2177106 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:23:59.576142 2177106 cache.go:56] Caching tarball of preloaded images
	I0120 16:23:59.576289 2177106 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:23:59.576300 2177106 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:23:59.576451 2177106 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/config.json ...
	I0120 16:23:59.576698 2177106 start.go:360] acquireMachinesLock for pause-162976: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:23:59.576740 2177106 start.go:364] duration metric: took 26.428µs to acquireMachinesLock for "pause-162976"
	I0120 16:23:59.576751 2177106 start.go:96] Skipping create...Using existing machine configuration
	I0120 16:23:59.576756 2177106 fix.go:54] fixHost starting: 
	I0120 16:23:59.577076 2177106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:23:59.577109 2177106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:23:59.595858 2177106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I0120 16:23:59.596338 2177106 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:23:59.596905 2177106 main.go:141] libmachine: Using API Version  1
	I0120 16:23:59.596927 2177106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:23:59.597261 2177106 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:23:59.597517 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:23:59.597756 2177106 main.go:141] libmachine: (pause-162976) Calling .GetState
	I0120 16:23:59.599746 2177106 fix.go:112] recreateIfNeeded on pause-162976: state=Running err=<nil>
	W0120 16:23:59.599767 2177106 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 16:23:59.601177 2177106 out.go:177] * Updating the running kvm2 "pause-162976" VM ...
	I0120 16:23:59.602021 2177106 machine.go:93] provisionDockerMachine start ...
	I0120 16:23:59.602049 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:23:59.602318 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:23:59.606150 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.606725 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:23:59.606750 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.607068 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:23:59.607245 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:23:59.607415 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:23:59.607891 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:23:59.608128 2177106 main.go:141] libmachine: Using SSH client type: native
	I0120 16:23:59.608409 2177106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0120 16:23:59.608429 2177106 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 16:23:59.720693 2177106 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-162976
	
	I0120 16:23:59.720732 2177106 main.go:141] libmachine: (pause-162976) Calling .GetMachineName
	I0120 16:23:59.721018 2177106 buildroot.go:166] provisioning hostname "pause-162976"
	I0120 16:23:59.721068 2177106 main.go:141] libmachine: (pause-162976) Calling .GetMachineName
	I0120 16:23:59.721225 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:23:59.725003 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.725471 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:23:59.725504 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.725729 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:23:59.725944 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:23:59.726138 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:23:59.726334 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:23:59.726548 2177106 main.go:141] libmachine: Using SSH client type: native
	I0120 16:23:59.726840 2177106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0120 16:23:59.726865 2177106 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-162976 && echo "pause-162976" | sudo tee /etc/hostname
	I0120 16:23:59.856851 2177106 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-162976
	
	I0120 16:23:59.856890 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:23:59.860578 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.861003 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:23:59.861051 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.861292 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:23:59.861537 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:23:59.861806 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:23:59.861986 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:23:59.862160 2177106 main.go:141] libmachine: Using SSH client type: native
	I0120 16:23:59.862383 2177106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0120 16:23:59.862401 2177106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-162976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-162976/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-162976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:23:59.975925 2177106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:23:59.975987 2177106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:23:59.976023 2177106 buildroot.go:174] setting up certificates
	I0120 16:23:59.976039 2177106 provision.go:84] configureAuth start
	I0120 16:23:59.976057 2177106 main.go:141] libmachine: (pause-162976) Calling .GetMachineName
	I0120 16:23:59.976393 2177106 main.go:141] libmachine: (pause-162976) Calling .GetIP
	I0120 16:23:59.980125 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.980559 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:23:59.980590 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.980788 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:23:59.983739 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.984243 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:23:59.984271 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:23:59.984602 2177106 provision.go:143] copyHostCerts
	I0120 16:23:59.984676 2177106 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:23:59.984715 2177106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:23:59.984795 2177106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:23:59.984937 2177106 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:23:59.984950 2177106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:23:59.984983 2177106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:23:59.985044 2177106 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:23:59.985055 2177106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:23:59.985083 2177106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:23:59.985147 2177106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.pause-162976 san=[127.0.0.1 192.168.39.194 localhost minikube pause-162976]
	I0120 16:24:00.214601 2177106 provision.go:177] copyRemoteCerts
	I0120 16:24:00.214701 2177106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:24:00.214739 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:24:00.218188 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:00.218663 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:00.218698 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:00.218891 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:24:00.219076 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:24:00.219254 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:24:00.219398 2177106 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/pause-162976/id_rsa Username:docker}
	I0120 16:24:00.314286 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:24:00.348595 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 16:24:00.391402 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:24:00.428311 2177106 provision.go:87] duration metric: took 452.239127ms to configureAuth
	I0120 16:24:00.428350 2177106 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:24:00.428636 2177106 config.go:182] Loaded profile config "pause-162976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:24:00.428751 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:24:00.432295 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:00.432760 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:00.432795 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:00.433319 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:24:00.433556 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:24:00.433743 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:24:00.433909 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:24:00.434114 2177106 main.go:141] libmachine: Using SSH client type: native
	I0120 16:24:00.434380 2177106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0120 16:24:00.434407 2177106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:24:07.908960 2177106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:24:07.909031 2177106 machine.go:96] duration metric: took 8.306962087s to provisionDockerMachine
	I0120 16:24:07.909049 2177106 start.go:293] postStartSetup for "pause-162976" (driver="kvm2")
	I0120 16:24:07.909064 2177106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:24:07.909096 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:24:07.909559 2177106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:24:07.909605 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:24:07.913022 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:07.913509 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:07.913546 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:07.913768 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:24:07.913977 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:24:07.914124 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:24:07.914302 2177106 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/pause-162976/id_rsa Username:docker}
	I0120 16:24:08.001422 2177106 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:24:08.006481 2177106 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:24:08.006515 2177106 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:24:08.006600 2177106 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:24:08.006715 2177106 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:24:08.006842 2177106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:24:08.019180 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:24:08.053732 2177106 start.go:296] duration metric: took 144.662123ms for postStartSetup
	I0120 16:24:08.053784 2177106 fix.go:56] duration metric: took 8.477026317s for fixHost
	I0120 16:24:08.053814 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:24:08.057017 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:08.057518 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:08.057555 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:08.057807 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:24:08.058060 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:24:08.058258 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:24:08.058457 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:24:08.058653 2177106 main.go:141] libmachine: Using SSH client type: native
	I0120 16:24:08.058895 2177106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0120 16:24:08.058912 2177106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:24:08.168237 2177106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737390248.155107162
	
	I0120 16:24:08.168274 2177106 fix.go:216] guest clock: 1737390248.155107162
	I0120 16:24:08.168284 2177106 fix.go:229] Guest: 2025-01-20 16:24:08.155107162 +0000 UTC Remote: 2025-01-20 16:24:08.053790134 +0000 UTC m=+8.654900900 (delta=101.317028ms)
	I0120 16:24:08.168349 2177106 fix.go:200] guest clock delta is within tolerance: 101.317028ms
	I0120 16:24:08.168356 2177106 start.go:83] releasing machines lock for "pause-162976", held for 8.591609748s
	I0120 16:24:08.168389 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:24:08.168767 2177106 main.go:141] libmachine: (pause-162976) Calling .GetIP
	I0120 16:24:08.171832 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:08.172220 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:08.172245 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:08.172438 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:24:08.172969 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:24:08.173167 2177106 main.go:141] libmachine: (pause-162976) Calling .DriverName
	I0120 16:24:08.173276 2177106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:24:08.173341 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:24:08.173426 2177106 ssh_runner.go:195] Run: cat /version.json
	I0120 16:24:08.173458 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHHostname
	I0120 16:24:08.176231 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:08.176500 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:08.176696 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:08.176727 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:08.176849 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:24:08.177036 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:24:08.177196 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:08.177210 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:24:08.177221 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:08.177390 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHPort
	I0120 16:24:08.177406 2177106 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/pause-162976/id_rsa Username:docker}
	I0120 16:24:08.177544 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHKeyPath
	I0120 16:24:08.177704 2177106 main.go:141] libmachine: (pause-162976) Calling .GetSSHUsername
	I0120 16:24:08.177841 2177106 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/pause-162976/id_rsa Username:docker}
	I0120 16:24:08.280824 2177106 ssh_runner.go:195] Run: systemctl --version
	I0120 16:24:08.289021 2177106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:24:08.448188 2177106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:24:08.454922 2177106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:24:08.455001 2177106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:24:08.465945 2177106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 16:24:08.465989 2177106 start.go:495] detecting cgroup driver to use...
	I0120 16:24:08.466080 2177106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:24:08.486122 2177106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:24:08.503129 2177106 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:24:08.503189 2177106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:24:08.520921 2177106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:24:08.541982 2177106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:24:08.683755 2177106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:24:08.869934 2177106 docker.go:233] disabling docker service ...
	I0120 16:24:08.870016 2177106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:24:08.984270 2177106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:24:09.040466 2177106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:24:09.293126 2177106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:24:09.605665 2177106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:24:09.642066 2177106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:24:09.750001 2177106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:24:09.750087 2177106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:24:09.841682 2177106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:24:09.841761 2177106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:24:09.906958 2177106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:24:10.051289 2177106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:24:10.091755 2177106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:24:10.137752 2177106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:24:10.176092 2177106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:24:10.205313 2177106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:24:10.245209 2177106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:24:10.292073 2177106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:24:10.330636 2177106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:24:10.640945 2177106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:24:11.506688 2177106 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:24:11.506781 2177106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:24:11.515255 2177106 start.go:563] Will wait 60s for crictl version
	I0120 16:24:11.515366 2177106 ssh_runner.go:195] Run: which crictl
	I0120 16:24:11.520344 2177106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:24:11.567208 2177106 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:24:11.567312 2177106 ssh_runner.go:195] Run: crio --version
	I0120 16:24:11.607089 2177106 ssh_runner.go:195] Run: crio --version
	I0120 16:24:11.649432 2177106 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:24:11.650836 2177106 main.go:141] libmachine: (pause-162976) Calling .GetIP
	I0120 16:24:11.654422 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:11.654902 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:11.654938 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:11.655213 2177106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:24:11.661338 2177106 kubeadm.go:883] updating cluster {Name:pause-162976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portai
ner:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:24:11.661462 2177106 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:24:11.661517 2177106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:24:11.718025 2177106 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:24:11.718058 2177106 crio.go:433] Images already preloaded, skipping extraction
	I0120 16:24:11.718135 2177106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:24:11.763428 2177106 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:24:11.763455 2177106 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:24:11.763463 2177106 kubeadm.go:934] updating node { 192.168.39.194 8443 v1.32.0 crio true true} ...
	I0120 16:24:11.763590 2177106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-162976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:24:11.763689 2177106 ssh_runner.go:195] Run: crio config
	I0120 16:24:11.819158 2177106 cni.go:84] Creating CNI manager for ""
	I0120 16:24:11.819189 2177106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:24:11.819202 2177106 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:24:11.819235 2177106 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-162976 NodeName:pause-162976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:24:11.819394 2177106 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-162976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.194"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:24:11.819457 2177106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:24:11.835348 2177106 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:24:11.835442 2177106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:24:11.847825 2177106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0120 16:24:11.870432 2177106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:24:11.891465 2177106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0120 16:24:11.913231 2177106 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0120 16:24:11.918227 2177106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:24:12.115056 2177106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:24:12.198015 2177106 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976 for IP: 192.168.39.194
	I0120 16:24:12.198054 2177106 certs.go:194] generating shared ca certs ...
	I0120 16:24:12.198078 2177106 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:24:12.198273 2177106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:24:12.198331 2177106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:24:12.198340 2177106 certs.go:256] generating profile certs ...
	I0120 16:24:12.198524 2177106 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/client.key
	I0120 16:24:12.198647 2177106 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.key.687d8b1d
	I0120 16:24:12.198706 2177106 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.key
	I0120 16:24:12.198864 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:24:12.198913 2177106 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:24:12.198927 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:24:12.198962 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:24:12.198997 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:24:12.199034 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:24:12.199103 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:24:12.199954 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:24:12.393832 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:24:12.488669 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:24:12.583366 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:24:12.684112 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:24:12.752345 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 16:24:12.791081 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:24:12.827517 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:24:12.875400 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:24:12.905631 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:24:12.956885 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:24:13.006982 2177106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:24:13.042182 2177106 ssh_runner.go:195] Run: openssl version
	I0120 16:24:13.055630 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:24:13.074284 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.083081 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.083159 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.098157 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:24:13.135167 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:24:13.148529 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.154016 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.154100 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.160345 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:24:13.170801 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:24:13.186416 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.191443 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.191519 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.197889 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:24:13.208516 2177106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:24:13.214662 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 16:24:13.232561 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 16:24:13.271852 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 16:24:13.285805 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 16:24:13.292569 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 16:24:13.299170 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 16:24:13.306424 2177106 kubeadm.go:392] StartCluster: {Name:pause-162976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:24:13.306556 2177106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:24:13.306653 2177106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:24:13.356292 2177106 cri.go:89] found id: "a42038c434d98f1152a5e927fe85b43f9c172aac433828277d7519d5c49ca63a"
	I0120 16:24:13.356334 2177106 cri.go:89] found id: "0fecd72a3609448ae6bbc13fe71867e16380e8c015580111aa27e8172d29d73c"
	I0120 16:24:13.356341 2177106 cri.go:89] found id: "69256c56044f8aa68b61d19abf7df48872b6f2d6c1b8d1e60e856ee2cb9b49b4"
	I0120 16:24:13.356352 2177106 cri.go:89] found id: "b74e8d394db408f587f96f3b8a7ae9de869765c7bb66a9d236a7f1e716e1cd4b"
	I0120 16:24:13.356358 2177106 cri.go:89] found id: "64ee06f07bc855fe1284f1d69b9b171ba7501d153a7d5610f399b7f311c0864c"
	I0120 16:24:13.356364 2177106 cri.go:89] found id: "ecbf6d21531691a4513ae29865b1569fc124cd9baef949a5fc43923c966c072d"
	I0120 16:24:13.356369 2177106 cri.go:89] found id: "fd63e8c68b4abfbd325c0a43b759f490848dda6a4fc0c0f13d9d4faf2a22d75f"
	I0120 16:24:13.356373 2177106 cri.go:89] found id: ""
	I0120 16:24:13.356431 2177106 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-162976 -n pause-162976
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-162976 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-162976 logs -n 25: (1.629139734s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-417532    | force-systemd-env-417532  | jenkins | v1.35.0 | 20 Jan 25 16:18 UTC | 20 Jan 25 16:20 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-366716         | offline-crio-366716       | jenkins | v1.35.0 | 20 Jan 25 16:18 UTC | 20 Jan 25 16:19 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:18 UTC | 20 Jan 25 16:20 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-366716         | offline-crio-366716       | jenkins | v1.35.0 | 20 Jan 25 16:19 UTC | 20 Jan 25 16:19 UTC |
	| start   | -p kubernetes-upgrade-207056   | kubernetes-upgrade-207056 | jenkins | v1.35.0 | 20 Jan 25 16:19 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-417532    | force-systemd-env-417532  | jenkins | v1.35.0 | 20 Jan 25 16:20 UTC | 20 Jan 25 16:20 UTC |
	| start   | -p stopped-upgrade-285935      | minikube                  | jenkins | v1.26.0 | 20 Jan 25 16:20 UTC | 20 Jan 25 16:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:20 UTC | 20 Jan 25 16:21 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:21 UTC |
	| start   | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:21 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-285935 stop    | minikube                  | jenkins | v1.26.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:21 UTC |
	| start   | -p stopped-upgrade-285935      | stopped-upgrade-285935    | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:22 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-383886 sudo    | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:21 UTC |
	| start   | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:22 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-285935      | stopped-upgrade-285935    | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:22 UTC |
	| ssh     | -p NoKubernetes-383886 sudo    | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:22 UTC |
	| start   | -p pause-162976 --memory=2048  | pause-162976              | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:23 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-054640      | minikube                  | jenkins | v1.26.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:23 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p cert-expiration-448539      | cert-expiration-448539    | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:24 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h        |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-054640      | running-upgrade-054640    | jenkins | v1.35.0 | 20 Jan 25 16:23 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-162976                | pause-162976              | jenkins | v1.35.0 | 20 Jan 25 16:23 UTC | 20 Jan 25 16:24 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-448539      | cert-expiration-448539    | jenkins | v1.35.0 | 20 Jan 25 16:24 UTC | 20 Jan 25 16:24 UTC |
	| start   | -p force-systemd-flag-860028   | force-systemd-flag-860028 | jenkins | v1.35.0 | 20 Jan 25 16:24 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:24:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:24:13.362925 2177297 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:24:13.363614 2177297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:24:13.363636 2177297 out.go:358] Setting ErrFile to fd 2...
	I0120 16:24:13.363646 2177297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:24:13.364124 2177297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:24:13.365170 2177297 out.go:352] Setting JSON to false
	I0120 16:24:13.366933 2177297 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29199,"bootTime":1737361054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:24:13.367112 2177297 start.go:139] virtualization: kvm guest
	I0120 16:24:13.369091 2177297 out.go:177] * [force-systemd-flag-860028] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:24:13.370652 2177297 notify.go:220] Checking for updates...
	I0120 16:24:13.370732 2177297 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:24:13.371967 2177297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:24:13.373196 2177297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:24:13.374435 2177297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:24:13.375525 2177297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:24:13.376738 2177297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:24:13.378568 2177297 config.go:182] Loaded profile config "kubernetes-upgrade-207056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:24:13.378811 2177297 config.go:182] Loaded profile config "pause-162976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:24:13.378951 2177297 config.go:182] Loaded profile config "running-upgrade-054640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 16:24:13.379115 2177297 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:24:13.419347 2177297 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:24:13.420688 2177297 start.go:297] selected driver: kvm2
	I0120 16:24:13.420710 2177297 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:24:13.420724 2177297 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:24:13.421537 2177297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:24:13.421644 2177297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:24:13.439960 2177297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:24:13.440033 2177297 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:24:13.440378 2177297 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 16:24:13.440424 2177297 cni.go:84] Creating CNI manager for ""
	I0120 16:24:13.440502 2177297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:24:13.440514 2177297 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:24:13.440595 2177297 start.go:340] cluster config:
	{Name:force-systemd-flag-860028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-flag-860028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:24:13.440737 2177297 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:24:13.443467 2177297 out.go:177] * Starting "force-systemd-flag-860028" primary control-plane node in "force-systemd-flag-860028" cluster
	I0120 16:24:13.444731 2177297 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:24:13.444802 2177297 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:24:13.444817 2177297 cache.go:56] Caching tarball of preloaded images
	I0120 16:24:13.444927 2177297 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:24:13.444946 2177297 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:24:13.445078 2177297 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/force-systemd-flag-860028/config.json ...
	I0120 16:24:13.445106 2177297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/force-systemd-flag-860028/config.json: {Name:mkee4d10e074b955459615425c1d9684bef2d7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:24:13.445250 2177297 start.go:360] acquireMachinesLock for force-systemd-flag-860028: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:24:13.445282 2177297 start.go:364] duration metric: took 17.192µs to acquireMachinesLock for "force-systemd-flag-860028"
	I0120 16:24:13.445297 2177297 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-860028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-flag-860028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:24:13.445368 2177297 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:24:11.650836 2177106 main.go:141] libmachine: (pause-162976) Calling .GetIP
	I0120 16:24:11.654422 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:11.654902 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:11.654938 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:11.655213 2177106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:24:11.661338 2177106 kubeadm.go:883] updating cluster {Name:pause-162976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:24:11.661462 2177106 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:24:11.661517 2177106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:24:11.718025 2177106 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:24:11.718058 2177106 crio.go:433] Images already preloaded, skipping extraction
	I0120 16:24:11.718135 2177106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:24:11.763428 2177106 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:24:11.763455 2177106 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:24:11.763463 2177106 kubeadm.go:934] updating node { 192.168.39.194 8443 v1.32.0 crio true true} ...
	I0120 16:24:11.763590 2177106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-162976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:24:11.763689 2177106 ssh_runner.go:195] Run: crio config
	I0120 16:24:11.819158 2177106 cni.go:84] Creating CNI manager for ""
	I0120 16:24:11.819189 2177106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:24:11.819202 2177106 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:24:11.819235 2177106 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-162976 NodeName:pause-162976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:24:11.819394 2177106 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-162976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.194"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:24:11.819457 2177106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:24:11.835348 2177106 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:24:11.835442 2177106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:24:11.847825 2177106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0120 16:24:11.870432 2177106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:24:11.891465 2177106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0120 16:24:11.913231 2177106 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0120 16:24:11.918227 2177106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:24:12.115056 2177106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:24:12.198015 2177106 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976 for IP: 192.168.39.194
	I0120 16:24:12.198054 2177106 certs.go:194] generating shared ca certs ...
	I0120 16:24:12.198078 2177106 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:24:12.198273 2177106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:24:12.198331 2177106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:24:12.198340 2177106 certs.go:256] generating profile certs ...
	I0120 16:24:12.198524 2177106 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/client.key
	I0120 16:24:12.198647 2177106 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.key.687d8b1d
	I0120 16:24:12.198706 2177106 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.key
	I0120 16:24:12.198864 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:24:12.198913 2177106 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:24:12.198927 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:24:12.198962 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:24:12.198997 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:24:12.199034 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:24:12.199103 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:24:12.199954 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:24:12.393832 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:24:12.488669 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:24:12.583366 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:24:12.684112 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:24:12.752345 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 16:24:12.791081 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:24:12.827517 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:24:12.875400 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:24:12.905631 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:24:12.956885 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:24:13.006982 2177106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:24:13.042182 2177106 ssh_runner.go:195] Run: openssl version
	I0120 16:24:13.055630 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:24:13.074284 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.083081 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.083159 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.098157 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:24:13.135167 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:24:13.148529 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.154016 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.154100 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.160345 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:24:13.170801 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:24:13.186416 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.191443 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.191519 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.197889 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:24:13.208516 2177106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:24:13.214662 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 16:24:13.232561 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 16:24:13.271852 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 16:24:13.285805 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 16:24:13.292569 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 16:24:13.299170 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 16:24:13.306424 2177106 kubeadm.go:392] StartCluster: {Name:pause-162976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:24:13.306556 2177106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:24:13.306653 2177106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:24:13.356292 2177106 cri.go:89] found id: "a42038c434d98f1152a5e927fe85b43f9c172aac433828277d7519d5c49ca63a"
	I0120 16:24:13.356334 2177106 cri.go:89] found id: "0fecd72a3609448ae6bbc13fe71867e16380e8c015580111aa27e8172d29d73c"
	I0120 16:24:13.356341 2177106 cri.go:89] found id: "69256c56044f8aa68b61d19abf7df48872b6f2d6c1b8d1e60e856ee2cb9b49b4"
	I0120 16:24:13.356352 2177106 cri.go:89] found id: "b74e8d394db408f587f96f3b8a7ae9de869765c7bb66a9d236a7f1e716e1cd4b"
	I0120 16:24:13.356358 2177106 cri.go:89] found id: "64ee06f07bc855fe1284f1d69b9b171ba7501d153a7d5610f399b7f311c0864c"
	I0120 16:24:13.356364 2177106 cri.go:89] found id: "ecbf6d21531691a4513ae29865b1569fc124cd9baef949a5fc43923c966c072d"
	I0120 16:24:13.356369 2177106 cri.go:89] found id: "fd63e8c68b4abfbd325c0a43b759f490848dda6a4fc0c0f13d9d4faf2a22d75f"
	I0120 16:24:13.356373 2177106 cri.go:89] found id: ""
	I0120 16:24:13.356431 2177106 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-162976 -n pause-162976
helpers_test.go:261: (dbg) Run:  kubectl --context pause-162976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
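Editor's note: the tail of the log dump above shows minikube enumerating kube-system containers over SSH with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` before deciding how to restart the paused cluster; the `cri.go:89] found id:` lines are the container IDs returned. Below is a minimal, illustrative sketch of the same query run locally in Go (assuming crictl is installed and the CRI socket is reachable); it is not minikube's cri.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the crictl invocation seen in the log:
// it returns the IDs of all containers (running or not) whose pod lives in
// the kube-system namespace.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}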
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-162976 -n pause-162976
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-162976 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-162976 logs -n 25: (1.624155313s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-417532    | force-systemd-env-417532  | jenkins | v1.35.0 | 20 Jan 25 16:18 UTC | 20 Jan 25 16:20 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-366716         | offline-crio-366716       | jenkins | v1.35.0 | 20 Jan 25 16:18 UTC | 20 Jan 25 16:19 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:18 UTC | 20 Jan 25 16:20 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-366716         | offline-crio-366716       | jenkins | v1.35.0 | 20 Jan 25 16:19 UTC | 20 Jan 25 16:19 UTC |
	| start   | -p kubernetes-upgrade-207056   | kubernetes-upgrade-207056 | jenkins | v1.35.0 | 20 Jan 25 16:19 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-417532    | force-systemd-env-417532  | jenkins | v1.35.0 | 20 Jan 25 16:20 UTC | 20 Jan 25 16:20 UTC |
	| start   | -p stopped-upgrade-285935      | minikube                  | jenkins | v1.26.0 | 20 Jan 25 16:20 UTC | 20 Jan 25 16:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:20 UTC | 20 Jan 25 16:21 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:21 UTC |
	| start   | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:21 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-285935 stop    | minikube                  | jenkins | v1.26.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:21 UTC |
	| start   | -p stopped-upgrade-285935      | stopped-upgrade-285935    | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:22 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-383886 sudo    | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:21 UTC |
	| start   | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:21 UTC | 20 Jan 25 16:22 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-285935      | stopped-upgrade-285935    | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:22 UTC |
	| ssh     | -p NoKubernetes-383886 sudo    | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-383886         | NoKubernetes-383886       | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:22 UTC |
	| start   | -p pause-162976 --memory=2048  | pause-162976              | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:23 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-054640      | minikube                  | jenkins | v1.26.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:23 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p cert-expiration-448539      | cert-expiration-448539    | jenkins | v1.35.0 | 20 Jan 25 16:22 UTC | 20 Jan 25 16:24 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h        |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-054640      | running-upgrade-054640    | jenkins | v1.35.0 | 20 Jan 25 16:23 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-162976                | pause-162976              | jenkins | v1.35.0 | 20 Jan 25 16:23 UTC | 20 Jan 25 16:24 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-448539      | cert-expiration-448539    | jenkins | v1.35.0 | 20 Jan 25 16:24 UTC | 20 Jan 25 16:24 UTC |
	| start   | -p force-systemd-flag-860028   | force-systemd-flag-860028 | jenkins | v1.35.0 | 20 Jan 25 16:24 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:24:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:24:13.362925 2177297 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:24:13.363614 2177297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:24:13.363636 2177297 out.go:358] Setting ErrFile to fd 2...
	I0120 16:24:13.363646 2177297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:24:13.364124 2177297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:24:13.365170 2177297 out.go:352] Setting JSON to false
	I0120 16:24:13.366933 2177297 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29199,"bootTime":1737361054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:24:13.367112 2177297 start.go:139] virtualization: kvm guest
	I0120 16:24:13.369091 2177297 out.go:177] * [force-systemd-flag-860028] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:24:13.370652 2177297 notify.go:220] Checking for updates...
	I0120 16:24:13.370732 2177297 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:24:13.371967 2177297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:24:13.373196 2177297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:24:13.374435 2177297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:24:13.375525 2177297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:24:13.376738 2177297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:24:13.378568 2177297 config.go:182] Loaded profile config "kubernetes-upgrade-207056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:24:13.378811 2177297 config.go:182] Loaded profile config "pause-162976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:24:13.378951 2177297 config.go:182] Loaded profile config "running-upgrade-054640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 16:24:13.379115 2177297 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:24:13.419347 2177297 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:24:13.420688 2177297 start.go:297] selected driver: kvm2
	I0120 16:24:13.420710 2177297 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:24:13.420724 2177297 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:24:13.421537 2177297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:24:13.421644 2177297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:24:13.439960 2177297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:24:13.440033 2177297 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:24:13.440378 2177297 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 16:24:13.440424 2177297 cni.go:84] Creating CNI manager for ""
	I0120 16:24:13.440502 2177297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:24:13.440514 2177297 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:24:13.440595 2177297 start.go:340] cluster config:
	{Name:force-systemd-flag-860028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-flag-860028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:24:13.440737 2177297 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:24:13.443467 2177297 out.go:177] * Starting "force-systemd-flag-860028" primary control-plane node in "force-systemd-flag-860028" cluster
	I0120 16:24:13.444731 2177297 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:24:13.444802 2177297 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:24:13.444817 2177297 cache.go:56] Caching tarball of preloaded images
	I0120 16:24:13.444927 2177297 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:24:13.444946 2177297 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:24:13.445078 2177297 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/force-systemd-flag-860028/config.json ...
	I0120 16:24:13.445106 2177297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/force-systemd-flag-860028/config.json: {Name:mkee4d10e074b955459615425c1d9684bef2d7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:24:13.445250 2177297 start.go:360] acquireMachinesLock for force-systemd-flag-860028: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:24:13.445282 2177297 start.go:364] duration metric: took 17.192µs to acquireMachinesLock for "force-systemd-flag-860028"
	I0120 16:24:13.445297 2177297 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-860028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-sys
temd-flag-860028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:24:13.445368 2177297 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:24:11.650836 2177106 main.go:141] libmachine: (pause-162976) Calling .GetIP
	I0120 16:24:11.654422 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:11.654902 2177106 main.go:141] libmachine: (pause-162976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:81:0a", ip: ""} in network mk-pause-162976: {Iface:virbr1 ExpiryTime:2025-01-20 17:22:50 +0000 UTC Type:0 Mac:52:54:00:45:81:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-162976 Clientid:01:52:54:00:45:81:0a}
	I0120 16:24:11.654938 2177106 main.go:141] libmachine: (pause-162976) DBG | domain pause-162976 has defined IP address 192.168.39.194 and MAC address 52:54:00:45:81:0a in network mk-pause-162976
	I0120 16:24:11.655213 2177106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:24:11.661338 2177106 kubeadm.go:883] updating cluster {Name:pause-162976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portai
ner:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:24:11.661462 2177106 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:24:11.661517 2177106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:24:11.718025 2177106 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:24:11.718058 2177106 crio.go:433] Images already preloaded, skipping extraction
	I0120 16:24:11.718135 2177106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:24:11.763428 2177106 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:24:11.763455 2177106 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:24:11.763463 2177106 kubeadm.go:934] updating node { 192.168.39.194 8443 v1.32.0 crio true true} ...
	I0120 16:24:11.763590 2177106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-162976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:24:11.763689 2177106 ssh_runner.go:195] Run: crio config
	I0120 16:24:11.819158 2177106 cni.go:84] Creating CNI manager for ""
	I0120 16:24:11.819189 2177106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:24:11.819202 2177106 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:24:11.819235 2177106 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-162976 NodeName:pause-162976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:24:11.819394 2177106 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-162976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.194"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:24:11.819457 2177106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:24:11.835348 2177106 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:24:11.835442 2177106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:24:11.847825 2177106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0120 16:24:11.870432 2177106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:24:11.891465 2177106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0120 16:24:11.913231 2177106 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0120 16:24:11.918227 2177106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:24:12.115056 2177106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:24:12.198015 2177106 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976 for IP: 192.168.39.194
	I0120 16:24:12.198054 2177106 certs.go:194] generating shared ca certs ...
	I0120 16:24:12.198078 2177106 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:24:12.198273 2177106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:24:12.198331 2177106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:24:12.198340 2177106 certs.go:256] generating profile certs ...
	I0120 16:24:12.198524 2177106 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/client.key
	I0120 16:24:12.198647 2177106 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.key.687d8b1d
	I0120 16:24:12.198706 2177106 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.key
	I0120 16:24:12.198864 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:24:12.198913 2177106 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:24:12.198927 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:24:12.198962 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:24:12.198997 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:24:12.199034 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:24:12.199103 2177106 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:24:12.199954 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:24:12.393832 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:24:12.488669 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:24:12.583366 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:24:12.684112 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:24:12.752345 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 16:24:12.791081 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:24:12.827517 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/pause-162976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:24:12.875400 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:24:12.905631 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:24:12.956885 2177106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:24:13.006982 2177106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:24:13.042182 2177106 ssh_runner.go:195] Run: openssl version
	I0120 16:24:13.055630 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:24:13.074284 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.083081 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.083159 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:24:13.098157 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:24:13.135167 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:24:13.148529 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.154016 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.154100 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:24:13.160345 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:24:13.170801 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:24:13.186416 2177106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.191443 2177106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.191519 2177106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:24:13.197889 2177106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:24:13.208516 2177106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:24:13.214662 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 16:24:13.232561 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 16:24:13.271852 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 16:24:13.285805 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 16:24:13.292569 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 16:24:13.299170 2177106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 16:24:13.306424 2177106 kubeadm.go:392] StartCluster: {Name:pause-162976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-162976 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer
:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:24:13.306556 2177106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:24:13.306653 2177106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:24:13.356292 2177106 cri.go:89] found id: "a42038c434d98f1152a5e927fe85b43f9c172aac433828277d7519d5c49ca63a"
	I0120 16:24:13.356334 2177106 cri.go:89] found id: "0fecd72a3609448ae6bbc13fe71867e16380e8c015580111aa27e8172d29d73c"
	I0120 16:24:13.356341 2177106 cri.go:89] found id: "69256c56044f8aa68b61d19abf7df48872b6f2d6c1b8d1e60e856ee2cb9b49b4"
	I0120 16:24:13.356352 2177106 cri.go:89] found id: "b74e8d394db408f587f96f3b8a7ae9de869765c7bb66a9d236a7f1e716e1cd4b"
	I0120 16:24:13.356358 2177106 cri.go:89] found id: "64ee06f07bc855fe1284f1d69b9b171ba7501d153a7d5610f399b7f311c0864c"
	I0120 16:24:13.356364 2177106 cri.go:89] found id: "ecbf6d21531691a4513ae29865b1569fc124cd9baef949a5fc43923c966c072d"
	I0120 16:24:13.356369 2177106 cri.go:89] found id: "fd63e8c68b4abfbd325c0a43b759f490848dda6a4fc0c0f13d9d4faf2a22d75f"
	I0120 16:24:13.356373 2177106 cri.go:89] found id: ""
	I0120 16:24:13.356431 2177106 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
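Editor's note: just before the container listing at the end of the log above, minikube re-validates every control-plane certificate with `openssl x509 -noout -in <cert> -checkend 86400`, i.e. it checks that each cert stays valid for at least another 24 hours. The Go snippet below performs the equivalent check with crypto/x509; it is an illustrative sketch only (minikube shells the openssl command over SSH, as the log shows), and the certificate path in main is taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file at path
// is still valid for at least the given duration -- the Go analogue of
// `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("valid for another 24h:", ok)
}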
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-162976 -n pause-162976
helpers_test.go:261: (dbg) Run:  kubectl --context pause-162976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.05s)
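Editor's note: earlier in the post-mortem log for this failure, minikube also rewrites the kubelet systemd drop-in (the `scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf` line), reloads systemd and starts kubelet. The sketch below is a rough local equivalent of that sequence, assuming root and systemd; the drop-in path is taken from the log, while the helper itself and the placeholder unit body are illustrative, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// installKubeletDropIn writes a kubelet drop-in and starts the service,
// mirroring the "scp memory -> 10-kubeadm.conf; daemon-reload; start kubelet"
// steps visible in the log. Run as root.
func installKubeletDropIn(dropIn []byte) error {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), dropIn, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "start", "kubelet"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Unit body elided; see the [Service] override printed in the log above.
	if err := installKubeletDropIn([]byte("[Service]\n")); err != nil {
		fmt.Println("error:", err)
	}
}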

                                                
                                    

TestStartStop/group/old-k8s-version/serial/FirstStart (309.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-806597 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
I0120 16:25:05.880239 2136749 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate858879667/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc000900cf0 gz:0xc000900cf8 tar:0xc000900c90 tar.bz2:0xc000900cb0 tar.gz:0xc000900cc0 tar.xz:0xc000900cd0 tar.zst:0xc000900ce0 tbz2:0xc000900cb0 tgz:0xc000900cc0 txz:0xc000900cd0 tzst:0xc000900ce0 xz:0xc000900d00 zip:0xc000900d10 zst:0xc000900d08] Getters:map[file:0xc0019a2450 http:0xc00093a230 https:0xc00093a280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response cod
e: 404. trying to get the common version
I0120 16:25:05.880305 2136749 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate858879667/001/docker-machine-driver-kvm2
I0120 16:25:07.418114 2136749 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 16:25:08.721352 2136749 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 16:25:08.755790 2136749 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0120 16:25:08.755833 2136749 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0120 16:25:08.755917 2136749 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 16:25:08.755959 2136749 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate858879667/002/docker-machine-driver-kvm2
I0120 16:25:08.922980 2136749 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate858879667/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc000900cf0 gz:0xc000900cf8 tar:0xc000900c90 tar.bz2:0xc000900cb0 tar.gz:0xc000900cc0 tar.xz:0xc000900cd0 tar.zst:0xc000900ce0 tbz2:0xc000900cb0 tgz:0xc000900cc0 txz:0xc000900cd0 tzst:0xc000900ce0 xz:0xc000900d00 zip:0xc000900d10 zst:0xc000900d08] Getters:map[file:0xc001b25680 http:0xc000074500 https:0xc000074550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response cod
e: 404. trying to get the common version
I0120 16:25:08.923056 2136749 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate858879667/002/docker-machine-driver-kvm2
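Editor's note: the two download entries above show the driver updater first requesting the arch-specific release asset (docker-machine-driver-kvm2-amd64 plus its .sha256 checksum), hitting a 404 on the checksum file, and then retrying the un-suffixed "common" asset name. The Go sketch below illustrates that try-arch-then-fallback pattern with a plain HTTP GET and no checksum verification; the real code uses go-getter (as the struct dump shows), and the URLs are only the ones quoted in the log.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchFirst downloads the first URL that answers with HTTP 200 into dst,
// mirroring the "arch-specific first, common name as fallback" behaviour
// seen in the log. Checksum handling is deliberately omitted here.
func fetchFirst(dst string, urls ...string) error {
	for _, u := range urls {
		resp, err := http.Get(u)
		if err != nil {
			continue
		}
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			continue // e.g. the 404 on the -amd64 checksum in the log
		}
		f, err := os.Create(dst)
		if err != nil {
			resp.Body.Close()
			return err
		}
		_, err = io.Copy(f, resp.Body)
		resp.Body.Close()
		f.Close()
		if err == nil {
			return nil
		}
	}
	return fmt.Errorf("no usable URL for %s", dst)
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
	err := fetchFirst("/tmp/docker-machine-driver-kvm2",
		base+"docker-machine-driver-kvm2-amd64", // arch-specific asset
		base+"docker-machine-driver-kvm2")       // common-name fallback
	fmt.Println("download result:", err)
}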
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-806597 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m9.454479462s)

                                                
                                                
-- stdout --
	* [old-k8s-version-806597] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Downloading driver docker-machine-driver-kvm2:
	* Starting "old-k8s-version-806597" primary control-plane node in "old-k8s-version-806597" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:25:05.837628 2180315 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:25:05.837791 2180315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:25:05.837804 2180315 out.go:358] Setting ErrFile to fd 2...
	I0120 16:25:05.837811 2180315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:25:05.838012 2180315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:25:05.838736 2180315 out.go:352] Setting JSON to false
	I0120 16:25:05.839771 2180315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29252,"bootTime":1737361054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:25:05.839894 2180315 start.go:139] virtualization: kvm guest
	I0120 16:25:05.841986 2180315 out.go:177] * [old-k8s-version-806597] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:25:05.843492 2180315 notify.go:220] Checking for updates...
	I0120 16:25:05.843536 2180315 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:25:05.844907 2180315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:25:05.846222 2180315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:25:05.847562 2180315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:25:05.849239 2180315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:25:05.850929 2180315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:25:05.853205 2180315 config.go:182] Loaded profile config "cert-options-435922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:25:05.853374 2180315 config.go:182] Loaded profile config "kubernetes-upgrade-207056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:25:05.853522 2180315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:25:05.894997 2180315 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:25:05.896321 2180315 start.go:297] selected driver: kvm2
	I0120 16:25:05.896339 2180315 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:25:05.896354 2180315 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:25:05.897076 2180315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:25:07.418171 2180315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0120 16:25:07.454727 2180315 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0120 16:25:07.457201 2180315 out.go:177] * Downloading driver docker-machine-driver-kvm2:
	I0120 16:25:07.458872 2180315 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.35.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.35.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:25:08.721227 2180315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:25:08.721638 2180315 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:25:08.721694 2180315 cni.go:84] Creating CNI manager for ""
	I0120 16:25:08.721753 2180315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:25:08.721766 2180315 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:25:08.721860 2180315 start.go:340] cluster config:
	{Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:25:08.722006 2180315 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:25:08.724471 2180315 out.go:177] * Starting "old-k8s-version-806597" primary control-plane node in "old-k8s-version-806597" cluster
	I0120 16:25:08.725826 2180315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 16:25:08.725911 2180315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:25:08.725931 2180315 cache.go:56] Caching tarball of preloaded images
	I0120 16:25:08.726067 2180315 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:25:08.726094 2180315 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 16:25:08.726250 2180315 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/config.json ...
	I0120 16:25:08.726281 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/config.json: {Name:mk641e3a07e3563b1e7003600696758bdf208883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:25:08.726492 2180315 start.go:360] acquireMachinesLock for old-k8s-version-806597: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:25:42.619917 2180315 start.go:364] duration metric: took 33.893308678s to acquireMachinesLock for "old-k8s-version-806597"
	I0120 16:25:42.620010 2180315 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:25:42.620148 2180315 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:25:42.622408 2180315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 16:25:42.622587 2180315 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:25:42.622673 2180315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:25:42.640124 2180315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I0120 16:25:42.640566 2180315 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:25:42.641206 2180315 main.go:141] libmachine: Using API Version  1
	I0120 16:25:42.641229 2180315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:25:42.641621 2180315 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:25:42.641851 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:25:42.642013 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:25:42.642166 2180315 start.go:159] libmachine.API.Create for "old-k8s-version-806597" (driver="kvm2")
	I0120 16:25:42.642197 2180315 client.go:168] LocalClient.Create starting
	I0120 16:25:42.642234 2180315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:25:42.642280 2180315 main.go:141] libmachine: Decoding PEM data...
	I0120 16:25:42.642308 2180315 main.go:141] libmachine: Parsing certificate...
	I0120 16:25:42.642390 2180315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:25:42.642415 2180315 main.go:141] libmachine: Decoding PEM data...
	I0120 16:25:42.642435 2180315 main.go:141] libmachine: Parsing certificate...
	I0120 16:25:42.642462 2180315 main.go:141] libmachine: Running pre-create checks...
	I0120 16:25:42.642483 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .PreCreateCheck
	I0120 16:25:42.642825 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetConfigRaw
	I0120 16:25:42.643320 2180315 main.go:141] libmachine: Creating machine...
	I0120 16:25:42.643336 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .Create
	I0120 16:25:42.643494 2180315 main.go:141] libmachine: (old-k8s-version-806597) creating KVM machine...
	I0120 16:25:42.643518 2180315 main.go:141] libmachine: (old-k8s-version-806597) creating network...
	I0120 16:25:42.644737 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found existing default KVM network
	I0120 16:25:42.646096 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:42.645927 2180805 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bb:28:fa} reservation:<nil>}
	I0120 16:25:42.647463 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:42.647336 2180805 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000302000}
	I0120 16:25:42.647571 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | created network xml: 
	I0120 16:25:42.647592 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | <network>
	I0120 16:25:42.647626 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |   <name>mk-old-k8s-version-806597</name>
	I0120 16:25:42.647663 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |   <dns enable='no'/>
	I0120 16:25:42.647688 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |   
	I0120 16:25:42.647700 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0120 16:25:42.647713 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |     <dhcp>
	I0120 16:25:42.647733 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0120 16:25:42.647751 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |     </dhcp>
	I0120 16:25:42.647768 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |   </ip>
	I0120 16:25:42.647786 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG |   
	I0120 16:25:42.647799 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | </network>
	I0120 16:25:42.647821 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | 
	I0120 16:25:42.653220 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | trying to create private KVM network mk-old-k8s-version-806597 192.168.50.0/24...
	I0120 16:25:42.726288 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | private KVM network mk-old-k8s-version-806597 192.168.50.0/24 created
	I0120 16:25:42.726327 2180315 main.go:141] libmachine: (old-k8s-version-806597) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597 ...
	I0120 16:25:42.726342 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:42.726210 2180805 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:25:42.726389 2180315 main.go:141] libmachine: (old-k8s-version-806597) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:25:42.726429 2180315 main.go:141] libmachine: (old-k8s-version-806597) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:25:43.014915 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:43.014737 2180805 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa...
	I0120 16:25:43.181377 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:43.181227 2180805 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/old-k8s-version-806597.rawdisk...
	I0120 16:25:43.181408 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | Writing magic tar header
	I0120 16:25:43.181422 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | Writing SSH key tar header
	I0120 16:25:43.181431 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:43.181348 2180805 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597 ...
	I0120 16:25:43.181449 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597
	I0120 16:25:43.181527 2180315 main.go:141] libmachine: (old-k8s-version-806597) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597 (perms=drwx------)
	I0120 16:25:43.181585 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:25:43.181602 2180315 main.go:141] libmachine: (old-k8s-version-806597) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:25:43.181618 2180315 main.go:141] libmachine: (old-k8s-version-806597) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:25:43.181634 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:25:43.181645 2180315 main.go:141] libmachine: (old-k8s-version-806597) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:25:43.181657 2180315 main.go:141] libmachine: (old-k8s-version-806597) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:25:43.181664 2180315 main.go:141] libmachine: (old-k8s-version-806597) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:25:43.181676 2180315 main.go:141] libmachine: (old-k8s-version-806597) creating domain...
	I0120 16:25:43.181690 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:25:43.181699 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:25:43.181709 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | checking permissions on dir: /home/jenkins
	I0120 16:25:43.181726 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | checking permissions on dir: /home
	I0120 16:25:43.181755 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | skipping /home - not owner
	I0120 16:25:43.182884 2180315 main.go:141] libmachine: (old-k8s-version-806597) define libvirt domain using xml: 
	I0120 16:25:43.182909 2180315 main.go:141] libmachine: (old-k8s-version-806597) <domain type='kvm'>
	I0120 16:25:43.182945 2180315 main.go:141] libmachine: (old-k8s-version-806597)   <name>old-k8s-version-806597</name>
	I0120 16:25:43.182977 2180315 main.go:141] libmachine: (old-k8s-version-806597)   <memory unit='MiB'>2200</memory>
	I0120 16:25:43.183001 2180315 main.go:141] libmachine: (old-k8s-version-806597)   <vcpu>2</vcpu>
	I0120 16:25:43.183017 2180315 main.go:141] libmachine: (old-k8s-version-806597)   <features>
	I0120 16:25:43.183043 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <acpi/>
	I0120 16:25:43.183058 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <apic/>
	I0120 16:25:43.183070 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <pae/>
	I0120 16:25:43.183079 2180315 main.go:141] libmachine: (old-k8s-version-806597)     
	I0120 16:25:43.183086 2180315 main.go:141] libmachine: (old-k8s-version-806597)   </features>
	I0120 16:25:43.183095 2180315 main.go:141] libmachine: (old-k8s-version-806597)   <cpu mode='host-passthrough'>
	I0120 16:25:43.183104 2180315 main.go:141] libmachine: (old-k8s-version-806597)   
	I0120 16:25:43.183118 2180315 main.go:141] libmachine: (old-k8s-version-806597)   </cpu>
	I0120 16:25:43.183127 2180315 main.go:141] libmachine: (old-k8s-version-806597)   <os>
	I0120 16:25:43.183141 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <type>hvm</type>
	I0120 16:25:43.183153 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <boot dev='cdrom'/>
	I0120 16:25:43.183163 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <boot dev='hd'/>
	I0120 16:25:43.183172 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <bootmenu enable='no'/>
	I0120 16:25:43.183181 2180315 main.go:141] libmachine: (old-k8s-version-806597)   </os>
	I0120 16:25:43.183248 2180315 main.go:141] libmachine: (old-k8s-version-806597)   <devices>
	I0120 16:25:43.183275 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <disk type='file' device='cdrom'>
	I0120 16:25:43.183292 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/boot2docker.iso'/>
	I0120 16:25:43.183308 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <target dev='hdc' bus='scsi'/>
	I0120 16:25:43.183323 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <readonly/>
	I0120 16:25:43.183336 2180315 main.go:141] libmachine: (old-k8s-version-806597)     </disk>
	I0120 16:25:43.183351 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <disk type='file' device='disk'>
	I0120 16:25:43.183370 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:25:43.183399 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/old-k8s-version-806597.rawdisk'/>
	I0120 16:25:43.183417 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <target dev='hda' bus='virtio'/>
	I0120 16:25:43.183432 2180315 main.go:141] libmachine: (old-k8s-version-806597)     </disk>
	I0120 16:25:43.183445 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <interface type='network'>
	I0120 16:25:43.183460 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <source network='mk-old-k8s-version-806597'/>
	I0120 16:25:43.183474 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <model type='virtio'/>
	I0120 16:25:43.183484 2180315 main.go:141] libmachine: (old-k8s-version-806597)     </interface>
	I0120 16:25:43.183502 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <interface type='network'>
	I0120 16:25:43.183516 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <source network='default'/>
	I0120 16:25:43.183529 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <model type='virtio'/>
	I0120 16:25:43.183543 2180315 main.go:141] libmachine: (old-k8s-version-806597)     </interface>
	I0120 16:25:43.183556 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <serial type='pty'>
	I0120 16:25:43.183571 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <target port='0'/>
	I0120 16:25:43.183599 2180315 main.go:141] libmachine: (old-k8s-version-806597)     </serial>
	I0120 16:25:43.183614 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <console type='pty'>
	I0120 16:25:43.183627 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <target type='serial' port='0'/>
	I0120 16:25:43.183640 2180315 main.go:141] libmachine: (old-k8s-version-806597)     </console>
	I0120 16:25:43.183652 2180315 main.go:141] libmachine: (old-k8s-version-806597)     <rng model='virtio'>
	I0120 16:25:43.183668 2180315 main.go:141] libmachine: (old-k8s-version-806597)       <backend model='random'>/dev/random</backend>
	I0120 16:25:43.183686 2180315 main.go:141] libmachine: (old-k8s-version-806597)     </rng>
	I0120 16:25:43.183720 2180315 main.go:141] libmachine: (old-k8s-version-806597)     
	I0120 16:25:43.183745 2180315 main.go:141] libmachine: (old-k8s-version-806597)     
	I0120 16:25:43.183766 2180315 main.go:141] libmachine: (old-k8s-version-806597)   </devices>
	I0120 16:25:43.183776 2180315 main.go:141] libmachine: (old-k8s-version-806597) </domain>
	I0120 16:25:43.183786 2180315 main.go:141] libmachine: (old-k8s-version-806597) 
	I0120 16:25:43.187955 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:e6:ef:cc in network default
	I0120 16:25:43.188844 2180315 main.go:141] libmachine: (old-k8s-version-806597) starting domain...
	I0120 16:25:43.188867 2180315 main.go:141] libmachine: (old-k8s-version-806597) ensuring networks are active...
	I0120 16:25:43.188880 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:43.189494 2180315 main.go:141] libmachine: (old-k8s-version-806597) Ensuring network default is active
	I0120 16:25:43.189834 2180315 main.go:141] libmachine: (old-k8s-version-806597) Ensuring network mk-old-k8s-version-806597 is active
	I0120 16:25:43.190457 2180315 main.go:141] libmachine: (old-k8s-version-806597) getting domain XML...
	I0120 16:25:43.191329 2180315 main.go:141] libmachine: (old-k8s-version-806597) creating domain...
	I0120 16:25:44.541075 2180315 main.go:141] libmachine: (old-k8s-version-806597) waiting for IP...
	I0120 16:25:44.542385 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:44.543129 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:44.543228 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:44.543143 2180805 retry.go:31] will retry after 254.970151ms: waiting for domain to come up
	I0120 16:25:44.799828 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:44.800385 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:44.800421 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:44.800354 2180805 retry.go:31] will retry after 235.04601ms: waiting for domain to come up
	I0120 16:25:45.036966 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:45.037613 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:45.037650 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:45.037542 2180805 retry.go:31] will retry after 430.932107ms: waiting for domain to come up
	I0120 16:25:45.470354 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:45.470937 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:45.470967 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:45.470909 2180805 retry.go:31] will retry after 389.110428ms: waiting for domain to come up
	I0120 16:25:45.861753 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:45.862231 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:45.862278 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:45.862211 2180805 retry.go:31] will retry after 676.514778ms: waiting for domain to come up
	I0120 16:25:46.539940 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:46.540451 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:46.540506 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:46.540422 2180805 retry.go:31] will retry after 816.499753ms: waiting for domain to come up
	I0120 16:25:47.358826 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:47.359471 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:47.359507 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:47.359448 2180805 retry.go:31] will retry after 737.471448ms: waiting for domain to come up
	I0120 16:25:48.099136 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:48.099707 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:48.099734 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:48.099700 2180805 retry.go:31] will retry after 1.070431059s: waiting for domain to come up
	I0120 16:25:49.171358 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:49.171819 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:49.171848 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:49.171808 2180805 retry.go:31] will retry after 1.246867848s: waiting for domain to come up
	I0120 16:25:50.420268 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:50.420764 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:50.420793 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:50.420729 2180805 retry.go:31] will retry after 1.733327189s: waiting for domain to come up
	I0120 16:25:52.155894 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:52.156471 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:52.156502 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:52.156435 2180805 retry.go:31] will retry after 2.553206208s: waiting for domain to come up
	I0120 16:25:54.712049 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:54.712540 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:54.712570 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:54.712514 2180805 retry.go:31] will retry after 3.290775988s: waiting for domain to come up
	I0120 16:25:58.005276 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:25:58.005642 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:25:58.005672 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:25:58.005578 2180805 retry.go:31] will retry after 3.942795342s: waiting for domain to come up
	I0120 16:26:01.951815 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:01.952324 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:26:01.952385 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:26:01.952291 2180805 retry.go:31] will retry after 4.071957347s: waiting for domain to come up
	I0120 16:26:06.028371 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.029058 2180315 main.go:141] libmachine: (old-k8s-version-806597) found domain IP: 192.168.50.241
	I0120 16:26:06.029085 2180315 main.go:141] libmachine: (old-k8s-version-806597) reserving static IP address...
	I0120 16:26:06.029117 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has current primary IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.029522 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-806597", mac: "52:54:00:02:1a:c1", ip: "192.168.50.241"} in network mk-old-k8s-version-806597
	I0120 16:26:06.116699 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | Getting to WaitForSSH function...
	I0120 16:26:06.116728 2180315 main.go:141] libmachine: (old-k8s-version-806597) reserved static IP address 192.168.50.241 for domain old-k8s-version-806597
	I0120 16:26:06.116741 2180315 main.go:141] libmachine: (old-k8s-version-806597) waiting for SSH...
	I0120 16:26:06.119622 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.120095 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.120129 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.120303 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | Using SSH client type: external
	I0120 16:26:06.120335 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa (-rw-------)
	I0120 16:26:06.120409 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:26:06.120431 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | About to run SSH command:
	I0120 16:26:06.120441 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | exit 0
	I0120 16:26:06.251348 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | SSH cmd err, output: <nil>: 
	I0120 16:26:06.251687 2180315 main.go:141] libmachine: (old-k8s-version-806597) KVM machine creation complete
	I0120 16:26:06.251979 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetConfigRaw
	I0120 16:26:06.252739 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:06.253017 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:06.253247 2180315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:26:06.253261 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetState
	I0120 16:26:06.254645 2180315 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:26:06.254663 2180315 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:26:06.254669 2180315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:26:06.254675 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.257222 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.257574 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.257620 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.257743 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:06.257991 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.258208 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.258391 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:06.258571 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:06.258830 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:06.258847 2180315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:26:06.370228 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:26:06.370253 2180315 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:26:06.370261 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.373176 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.373546 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.373570 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.373778 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:06.374027 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.374191 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.374367 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:06.374532 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:06.374772 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:06.374787 2180315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:26:06.487716 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:26:06.487797 2180315 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:26:06.487812 2180315 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:26:06.487822 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:26:06.488060 2180315 buildroot.go:166] provisioning hostname "old-k8s-version-806597"
	I0120 16:26:06.488102 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:26:06.488215 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.491033 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.491430 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.491461 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.491618 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:06.491798 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.491958 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.492108 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:06.492320 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:06.492530 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:06.492559 2180315 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-806597 && echo "old-k8s-version-806597" | sudo tee /etc/hostname
	I0120 16:26:06.622625 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-806597
	
	I0120 16:26:06.622683 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.625695 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.626064 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.626093 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.626318 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:06.626542 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.626719 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:06.626838 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:06.627006 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:06.627249 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:06.627268 2180315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-806597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-806597/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-806597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:26:06.750870 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:26:06.750917 2180315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:26:06.750943 2180315 buildroot.go:174] setting up certificates
	I0120 16:26:06.750959 2180315 provision.go:84] configureAuth start
	I0120 16:26:06.750979 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:26:06.751306 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:26:06.754453 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.754849 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.754886 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.755018 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:06.757590 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.757935 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:06.757965 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:06.758172 2180315 provision.go:143] copyHostCerts
	I0120 16:26:06.758244 2180315 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:26:06.758258 2180315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:26:06.758329 2180315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:26:06.758465 2180315 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:26:06.758476 2180315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:26:06.758501 2180315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:26:06.758594 2180315 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:26:06.758623 2180315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:26:06.758655 2180315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:26:06.758745 2180315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-806597 san=[127.0.0.1 192.168.50.241 localhost minikube old-k8s-version-806597]
	I0120 16:26:07.098838 2180315 provision.go:177] copyRemoteCerts
	I0120 16:26:07.098934 2180315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:26:07.098970 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.101838 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.102155 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.102181 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.102361 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.102576 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.102760 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.102866 2180315 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:26:07.190087 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:26:07.214751 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 16:26:07.240207 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 16:26:07.266308 2180315 provision.go:87] duration metric: took 515.329527ms to configureAuth
	I0120 16:26:07.266342 2180315 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:26:07.266540 2180315 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:26:07.266653 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.269551 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.269905 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.269938 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.270096 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.270354 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.270557 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.270747 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.270943 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:07.271130 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:07.271146 2180315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:26:07.509188 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:26:07.509227 2180315 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:26:07.509241 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetURL
	I0120 16:26:07.510661 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | using libvirt version 6000000
	I0120 16:26:07.513007 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.513374 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.513417 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.513621 2180315 main.go:141] libmachine: Docker is up and running!
	I0120 16:26:07.513643 2180315 main.go:141] libmachine: Reticulating splines...
	I0120 16:26:07.513672 2180315 client.go:171] duration metric: took 24.871443012s to LocalClient.Create
	I0120 16:26:07.513703 2180315 start.go:167] duration metric: took 24.87153796s to libmachine.API.Create "old-k8s-version-806597"
	I0120 16:26:07.513715 2180315 start.go:293] postStartSetup for "old-k8s-version-806597" (driver="kvm2")
	I0120 16:26:07.513729 2180315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:26:07.513749 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.514044 2180315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:26:07.514072 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.516543 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.516855 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.516880 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.517133 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.517362 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.517578 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.517719 2180315 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:26:07.606722 2180315 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:26:07.611484 2180315 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:26:07.611519 2180315 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:26:07.611602 2180315 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:26:07.611675 2180315 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:26:07.611800 2180315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:26:07.621801 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:26:07.648048 2180315 start.go:296] duration metric: took 134.299886ms for postStartSetup
	I0120 16:26:07.648126 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetConfigRaw
	I0120 16:26:07.648839 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:26:07.651615 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.651998 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.652026 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.652217 2180315 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/config.json ...
	I0120 16:26:07.652489 2180315 start.go:128] duration metric: took 25.03232575s to createHost
	I0120 16:26:07.652518 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.654957 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.655361 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.655389 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.655513 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.655746 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.655900 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.656069 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.656222 2180315 main.go:141] libmachine: Using SSH client type: native
	I0120 16:26:07.656387 2180315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:26:07.656397 2180315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:26:07.771640 2180315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737390367.742600560
	
	I0120 16:26:07.771676 2180315 fix.go:216] guest clock: 1737390367.742600560
	I0120 16:26:07.771684 2180315 fix.go:229] Guest: 2025-01-20 16:26:07.74260056 +0000 UTC Remote: 2025-01-20 16:26:07.652504125 +0000 UTC m=+61.859229819 (delta=90.096435ms)
	I0120 16:26:07.771709 2180315 fix.go:200] guest clock delta is within tolerance: 90.096435ms
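The clock check above is a plain subtraction of the two timestamps recorded in the preceding line; worked out from the values in the log:

    guest  (VM `date +%s.%N`)      : 1737390367.742600560 s
    local  (time the call returned): 1737390367.652504125 s
    delta                          : 0.742600560 - 0.652504125 = 0.090096435 s ≈ 90.096 ms

So the guest clock is roughly 90 ms ahead of the host, which the log reports as within tolerance, and no clock adjustment follows.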
	I0120 16:26:07.771717 2180315 start.go:83] releasing machines lock for "old-k8s-version-806597", held for 25.151752748s
	I0120 16:26:07.771752 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.772033 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:26:07.775217 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.775707 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.775749 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.776022 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.776781 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.777036 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:26:07.777158 2180315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:26:07.777222 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.777536 2180315 ssh_runner.go:195] Run: cat /version.json
	I0120 16:26:07.777560 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:26:07.780250 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.780595 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.780643 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.780675 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.780825 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.780957 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:07.780981 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:07.780990 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.781150 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:26:07.781157 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.781323 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:26:07.781315 2180315 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:26:07.781494 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:26:07.781648 2180315 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:26:07.864771 2180315 ssh_runner.go:195] Run: systemctl --version
	I0120 16:26:07.892210 2180315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:26:08.068848 2180315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:26:08.075723 2180315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:26:08.075810 2180315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:26:08.093929 2180315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
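The find pass two lines up renames any pre-existing bridge or podman CNI config under /etc/cni/net.d so it cannot conflict with the CNI that minikube configures later. In this run the only match was the podman bridge config, so its effect is equivalent to:

    sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
            /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled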
	I0120 16:26:08.093977 2180315 start.go:495] detecting cgroup driver to use...
	I0120 16:26:08.094099 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:26:08.118924 2180315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:26:08.139602 2180315 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:26:08.139676 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:26:08.155249 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:26:08.170466 2180315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:26:08.298339 2180315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:26:08.482682 2180315 docker.go:233] disabling docker service ...
	I0120 16:26:08.482763 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:26:08.503903 2180315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:26:08.517728 2180315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:26:08.655481 2180315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:26:08.810102 2180315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:26:08.825925 2180315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:26:08.846193 2180315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 16:26:08.846277 2180315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:08.857437 2180315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:26:08.857539 2180315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:08.869364 2180315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:26:08.881019 2180315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
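The four sed invocations above all edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, set the cgroup manager to cgroupfs, and replace any existing conmon_cgroup line with conmon_cgroup = "pod". A sketch of the drop-in after these edits (the section headers are illustrative; only the three key/value pairs come from the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"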
	I0120 16:26:08.892614 2180315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:26:08.904669 2180315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:26:08.915983 2180315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:26:08.916058 2180315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:26:08.933201 2180315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
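Because /proc/sys/net/bridge/bridge-nf-call-iptables was missing, minikube loads br_netfilter and then turns on IPv4 forwarding; both changes are applied at runtime only. Purely as an illustration (nothing in this log writes these files), a persistent equivalent on a systemd host would look like:

    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system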
	I0120 16:26:08.943616 2180315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:26:09.105895 2180315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:26:09.227633 2180315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:26:09.227733 2180315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:26:09.235332 2180315 start.go:563] Will wait 60s for crictl version
	I0120 16:26:09.235428 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:09.240095 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:26:09.292885 2180315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:26:09.293039 2180315 ssh_runner.go:195] Run: crio --version
	I0120 16:26:09.324814 2180315 ssh_runner.go:195] Run: crio --version
	I0120 16:26:09.358115 2180315 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 16:26:09.359602 2180315 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:26:09.363063 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:09.363567 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:25:59 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:26:09.363603 2180315 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:26:09.363883 2180315 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 16:26:09.368562 2180315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
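The bash one-liner above rewrites /etc/hosts in place: it filters out any stale host.minikube.internal entry, appends the current mapping to a temp file, and copies the temp file back with sudo. The line it guarantees is present afterwards is:

    192.168.50.1	host.minikube.internal

The same pattern is applied further down for control-plane.minikube.internal.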
	I0120 16:26:09.381987 2180315 kubeadm.go:883] updating cluster {Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:26:09.382129 2180315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 16:26:09.382187 2180315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:26:09.419218 2180315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 16:26:09.419289 2180315 ssh_runner.go:195] Run: which lz4
	I0120 16:26:09.423711 2180315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:26:09.428502 2180315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:26:09.428538 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 16:26:11.255602 2180315 crio.go:462] duration metric: took 1.831932276s to copy over tarball
	I0120 16:26:11.255721 2180315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:26:13.958662 2180315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.702897487s)
	I0120 16:26:13.958696 2180315 crio.go:469] duration metric: took 2.703057594s to extract the tarball
	I0120 16:26:13.958704 2180315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:26:14.003804 2180315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:26:14.054640 2180315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 16:26:14.054677 2180315 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 16:26:14.054745 2180315 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:14.054793 2180315 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.054813 2180315 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 16:26:14.054842 2180315 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.054874 2180315 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.054843 2180315 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.054852 2180315 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.054796 2180315 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.056197 2180315 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.056442 2180315 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.056458 2180315 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.056501 2180315 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:14.056442 2180315 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.056446 2180315 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.056571 2180315 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 16:26:14.056504 2180315 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.255911 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.261015 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.271871 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 16:26:14.283998 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.287187 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.299760 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.329076 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.396160 2180315 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 16:26:14.396234 2180315 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.396338 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.396341 2180315 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 16:26:14.396404 2180315 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.396461 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.426212 2180315 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 16:26:14.426273 2180315 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 16:26:14.426342 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.445345 2180315 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 16:26:14.445399 2180315 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.445455 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.470555 2180315 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 16:26:14.470624 2180315 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.470701 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.475886 2180315 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 16:26:14.475959 2180315 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.476032 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.483228 2180315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:26:14.485259 2180315 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 16:26:14.485316 2180315 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.485360 2180315 ssh_runner.go:195] Run: which crictl
	I0120 16:26:14.485370 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.485361 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.485396 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:26:14.485433 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.485496 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.485502 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.771462 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.771604 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.771626 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:26:14.771678 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.771745 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:26:14.771763 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.771811 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.948549 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:26:14.948568 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:26:14.948595 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:14.948653 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:26:14.948759 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:26:14.948773 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:26:14.948838 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:26:15.113088 2180315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:26:15.113170 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 16:26:15.113333 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 16:26:15.113423 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 16:26:15.113461 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 16:26:15.113517 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 16:26:15.113585 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 16:26:15.149469 2180315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 16:26:15.149576 2180315 cache_images.go:92] duration metric: took 1.094879778s to LoadCachedImages
	W0120 16:26:15.149665 2180315 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0120 16:26:15.149685 2180315 kubeadm.go:934] updating node { 192.168.50.241 8443 v1.20.0 crio true true} ...
	I0120 16:26:15.149874 2180315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-806597 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
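The kubelet fragment printed above (kubeadm.go:946) is the systemd drop-in that minikube writes a few lines later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 430-byte scp). The empty ExecStart= followed by a second ExecStart= is the standard drop-in idiom for replacing, rather than appending to, the base unit's start command. Reassembled from exactly what the log shows (the real file may carry additional directives):

    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-806597 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.241

    [Install]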
	I0120 16:26:15.149965 2180315 ssh_runner.go:195] Run: crio config
	I0120 16:26:15.210239 2180315 cni.go:84] Creating CNI manager for ""
	I0120 16:26:15.210274 2180315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:26:15.210289 2180315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:26:15.210311 2180315 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.241 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-806597 NodeName:old-k8s-version-806597 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 16:26:15.210498 2180315 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-806597"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:26:15.210588 2180315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 16:26:15.222433 2180315 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:26:15.222528 2180315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:26:15.234452 2180315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 16:26:15.253710 2180315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:26:15.274391 2180315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 16:26:15.295494 2180315 ssh_runner.go:195] Run: grep 192.168.50.241	control-plane.minikube.internal$ /etc/hosts
	I0120 16:26:15.300358 2180315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:26:15.314207 2180315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:26:15.445942 2180315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:26:15.467879 2180315 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597 for IP: 192.168.50.241
	I0120 16:26:15.467911 2180315 certs.go:194] generating shared ca certs ...
	I0120 16:26:15.467937 2180315 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.468170 2180315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:26:15.468250 2180315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:26:15.468266 2180315 certs.go:256] generating profile certs ...
	I0120 16:26:15.468364 2180315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.key
	I0120 16:26:15.468390 2180315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.crt with IP's: []
	I0120 16:26:15.577472 2180315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.crt ...
	I0120 16:26:15.577509 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.crt: {Name:mk914869e99403fc00f1cc4cad2ac1e0f3ec5551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.577754 2180315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.key ...
	I0120 16:26:15.577783 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.key: {Name:mkb92bb614ad1cca6b0bdf061440a9ad4a00c5e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.577908 2180315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1
	I0120 16:26:15.577927 2180315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt.72816fb1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.241]
	I0120 16:26:15.668661 2180315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt.72816fb1 ...
	I0120 16:26:15.668711 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt.72816fb1: {Name:mk8ca43e254a5404c4e4ca93c5c33b7ec4ae25d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.668957 2180315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1 ...
	I0120 16:26:15.668994 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1: {Name:mkd7928d3b6f1cd61571f995c138a3935139db8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.669164 2180315 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt.72816fb1 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt
	I0120 16:26:15.669301 2180315 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key
	I0120 16:26:15.669399 2180315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key
	I0120 16:26:15.669437 2180315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt with IP's: []
	I0120 16:26:15.967203 2180315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt ...
	I0120 16:26:15.967244 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt: {Name:mk9c2857e3082a01d8d3c5bec5ce892ccc2ad7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.967440 2180315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key ...
	I0120 16:26:15.967455 2180315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key: {Name:mk4c49d697564a24897bf19dd29d9182642aa2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:26:15.967625 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:26:15.967664 2180315 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:26:15.967676 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:26:15.967699 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:26:15.967723 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:26:15.967744 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:26:15.967781 2180315 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:26:15.968409 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:26:15.996020 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:26:16.024650 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:26:16.052287 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:26:16.080693 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 16:26:16.108290 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 16:26:16.133710 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:26:16.159665 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 16:26:16.187716 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:26:16.213166 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:26:16.239603 2180315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:26:16.265627 2180315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:26:16.286526 2180315 ssh_runner.go:195] Run: openssl version
	I0120 16:26:16.293672 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:26:16.306832 2180315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:26:16.312186 2180315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:26:16.312254 2180315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:26:16.319194 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:26:16.339608 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:26:16.361478 2180315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:16.370929 2180315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:16.371021 2180315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:26:16.377897 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:26:16.395739 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:26:16.412706 2180315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:26:16.419322 2180315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:26:16.419420 2180315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:26:16.426849 2180315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
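Each CA is installed twice on the guest: the PEM itself under /usr/share/ca-certificates, plus a symlink in /etc/ssl/certs named after the certificate's OpenSSL subject hash so that standard TLS lookups can find it. The hash in each symlink name is exactly what `openssl x509 -hash -noout` prints; for the minikube CA in this run, for example:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0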
	I0120 16:26:16.442098 2180315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:26:16.447038 2180315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:26:16.447118 2180315 kubeadm.go:392] StartCluster: {Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:26:16.447228 2180315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:26:16.447310 2180315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:26:16.488184 2180315 cri.go:89] found id: ""
	I0120 16:26:16.488327 2180315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:26:16.499968 2180315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:26:16.510813 2180315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:26:16.521717 2180315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:26:16.521744 2180315 kubeadm.go:157] found existing configuration files:
	
	I0120 16:26:16.521802 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:26:16.532541 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:26:16.532660 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:26:16.544206 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:26:16.554880 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:26:16.554976 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:26:16.567537 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:26:16.579804 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:26:16.579878 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:26:16.592478 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:26:16.604442 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:26:16.604526 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:26:16.616832 2180315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:26:16.743059 2180315 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 16:26:16.743124 2180315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:26:16.887386 2180315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:26:16.887552 2180315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:26:16.887686 2180315 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 16:26:17.092403 2180315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:26:17.095152 2180315 out.go:235]   - Generating certificates and keys ...
	I0120 16:26:17.095265 2180315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:26:17.095400 2180315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:26:17.232982 2180315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:26:17.360580 2180315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:26:17.458945 2180315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:26:17.755642 2180315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:26:17.994212 2180315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:26:17.994402 2180315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-806597] and IPs [192.168.50.241 127.0.0.1 ::1]
	I0120 16:26:18.167059 2180315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:26:18.167305 2180315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-806597] and IPs [192.168.50.241 127.0.0.1 ::1]
	I0120 16:26:18.343031 2180315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:26:18.731988 2180315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:26:19.042242 2180315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:26:19.042344 2180315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:26:19.483495 2180315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:26:19.688174 2180315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:26:19.999937 2180315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:26:20.097904 2180315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:26:20.123004 2180315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:26:20.123402 2180315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:26:20.123476 2180315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:26:20.273327 2180315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:26:20.275346 2180315 out.go:235]   - Booting up control plane ...
	I0120 16:26:20.275493 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:26:20.282943 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:26:20.284334 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:26:20.285288 2180315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:26:20.290986 2180315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 16:27:00.284539 2180315 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 16:27:00.285489 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:27:00.285713 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:27:05.286010 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:27:05.286288 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:27:15.285409 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:27:15.285642 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:27:35.285200 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:27:35.285474 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:28:15.287572 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:28:15.287782 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:28:15.287841 2180315 kubeadm.go:310] 
	I0120 16:28:15.287913 2180315 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 16:28:15.287988 2180315 kubeadm.go:310] 		timed out waiting for the condition
	I0120 16:28:15.288003 2180315 kubeadm.go:310] 
	I0120 16:28:15.288029 2180315 kubeadm.go:310] 	This error is likely caused by:
	I0120 16:28:15.288088 2180315 kubeadm.go:310] 		- The kubelet is not running
	I0120 16:28:15.288213 2180315 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 16:28:15.288224 2180315 kubeadm.go:310] 
	I0120 16:28:15.288379 2180315 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 16:28:15.288413 2180315 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 16:28:15.288450 2180315 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 16:28:15.288463 2180315 kubeadm.go:310] 
	I0120 16:28:15.288604 2180315 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 16:28:15.288675 2180315 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 16:28:15.288689 2180315 kubeadm.go:310] 
	I0120 16:28:15.288822 2180315 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 16:28:15.288925 2180315 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 16:28:15.289048 2180315 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 16:28:15.289159 2180315 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 16:28:15.289171 2180315 kubeadm.go:310] 
	I0120 16:28:15.289538 2180315 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:28:15.289637 2180315 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 16:28:15.289764 2180315 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0120 16:28:15.289964 2180315 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-806597] and IPs [192.168.50.241 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-806597] and IPs [192.168.50.241 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-806597] and IPs [192.168.50.241 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-806597] and IPs [192.168.50.241 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 16:28:15.290009 2180315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 16:28:17.732807 2180315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.44275482s)
	I0120 16:28:17.732901 2180315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:28:17.747772 2180315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:28:17.757979 2180315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:28:17.758006 2180315 kubeadm.go:157] found existing configuration files:
	
	I0120 16:28:17.758062 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:28:17.767540 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:28:17.767630 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:28:17.777476 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:28:17.786547 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:28:17.786617 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:28:17.796245 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:28:17.805721 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:28:17.805827 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:28:17.816019 2180315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:28:17.825348 2180315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:28:17.825410 2180315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:28:17.835578 2180315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:28:18.044639 2180315 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:30:14.312909 2180315 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 16:30:14.313035 2180315 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 16:30:14.315200 2180315 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 16:30:14.315283 2180315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:30:14.315368 2180315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:30:14.315446 2180315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:30:14.315524 2180315 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 16:30:14.315575 2180315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:30:14.317295 2180315 out.go:235]   - Generating certificates and keys ...
	I0120 16:30:14.317399 2180315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:30:14.317511 2180315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:30:14.317643 2180315 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 16:30:14.317718 2180315 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 16:30:14.317835 2180315 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 16:30:14.317946 2180315 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 16:30:14.318047 2180315 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 16:30:14.318129 2180315 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 16:30:14.318260 2180315 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 16:30:14.318362 2180315 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 16:30:14.318416 2180315 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 16:30:14.318494 2180315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:30:14.318561 2180315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:30:14.318643 2180315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:30:14.318727 2180315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:30:14.318806 2180315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:30:14.318954 2180315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:30:14.319072 2180315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:30:14.319126 2180315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:30:14.319212 2180315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:30:14.320710 2180315 out.go:235]   - Booting up control plane ...
	I0120 16:30:14.320837 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:30:14.320936 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:30:14.321022 2180315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:30:14.321126 2180315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:30:14.321344 2180315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 16:30:14.321413 2180315 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 16:30:14.321508 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:30:14.321779 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:30:14.321881 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:30:14.322162 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:30:14.322276 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:30:14.322488 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:30:14.322576 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:30:14.322869 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:30:14.322969 2180315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:30:14.323211 2180315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:30:14.323224 2180315 kubeadm.go:310] 
	I0120 16:30:14.323293 2180315 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 16:30:14.323335 2180315 kubeadm.go:310] 		timed out waiting for the condition
	I0120 16:30:14.323344 2180315 kubeadm.go:310] 
	I0120 16:30:14.323389 2180315 kubeadm.go:310] 	This error is likely caused by:
	I0120 16:30:14.323423 2180315 kubeadm.go:310] 		- The kubelet is not running
	I0120 16:30:14.323536 2180315 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 16:30:14.323544 2180315 kubeadm.go:310] 
	I0120 16:30:14.323657 2180315 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 16:30:14.323713 2180315 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 16:30:14.323769 2180315 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 16:30:14.323778 2180315 kubeadm.go:310] 
	I0120 16:30:14.323891 2180315 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 16:30:14.324023 2180315 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 16:30:14.324046 2180315 kubeadm.go:310] 
	I0120 16:30:14.324202 2180315 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 16:30:14.324338 2180315 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 16:30:14.324457 2180315 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 16:30:14.324588 2180315 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 16:30:14.324643 2180315 kubeadm.go:310] 
	I0120 16:30:14.324694 2180315 kubeadm.go:394] duration metric: took 3m57.877582207s to StartCluster
	I0120 16:30:14.324770 2180315 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:30:14.324860 2180315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:30:14.389563 2180315 cri.go:89] found id: ""
	I0120 16:30:14.389597 2180315 logs.go:282] 0 containers: []
	W0120 16:30:14.389608 2180315 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:30:14.389617 2180315 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:30:14.389700 2180315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:30:14.443626 2180315 cri.go:89] found id: ""
	I0120 16:30:14.443665 2180315 logs.go:282] 0 containers: []
	W0120 16:30:14.443677 2180315 logs.go:284] No container was found matching "etcd"
	I0120 16:30:14.443685 2180315 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:30:14.443777 2180315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:30:14.493330 2180315 cri.go:89] found id: ""
	I0120 16:30:14.493369 2180315 logs.go:282] 0 containers: []
	W0120 16:30:14.493382 2180315 logs.go:284] No container was found matching "coredns"
	I0120 16:30:14.493392 2180315 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:30:14.493474 2180315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:30:14.535826 2180315 cri.go:89] found id: ""
	I0120 16:30:14.535863 2180315 logs.go:282] 0 containers: []
	W0120 16:30:14.535875 2180315 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:30:14.535883 2180315 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:30:14.535971 2180315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:30:14.586146 2180315 cri.go:89] found id: ""
	I0120 16:30:14.586185 2180315 logs.go:282] 0 containers: []
	W0120 16:30:14.586196 2180315 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:30:14.586205 2180315 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:30:14.586281 2180315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:30:14.629186 2180315 cri.go:89] found id: ""
	I0120 16:30:14.629215 2180315 logs.go:282] 0 containers: []
	W0120 16:30:14.629224 2180315 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:30:14.629231 2180315 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:30:14.629307 2180315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:30:14.670583 2180315 cri.go:89] found id: ""
	I0120 16:30:14.670631 2180315 logs.go:282] 0 containers: []
	W0120 16:30:14.670642 2180315 logs.go:284] No container was found matching "kindnet"
	I0120 16:30:14.670657 2180315 logs.go:123] Gathering logs for container status ...
	I0120 16:30:14.670683 2180315 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:30:14.735305 2180315 logs.go:123] Gathering logs for kubelet ...
	I0120 16:30:14.735350 2180315 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:30:14.816373 2180315 logs.go:123] Gathering logs for dmesg ...
	I0120 16:30:14.816424 2180315 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:30:14.838060 2180315 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:30:14.838112 2180315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:30:15.061156 2180315 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:30:15.061194 2180315 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:30:15.061214 2180315 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0120 16:30:15.219738 2180315 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 16:30:15.219923 2180315 out.go:270] * 
	* 
	W0120 16:30:15.220112 2180315 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 16:30:15.220184 2180315 out.go:270] * 
	* 
	W0120 16:30:15.221714 2180315 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:30:15.225912 2180315 out.go:201] 
	W0120 16:30:15.227397 2180315 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 16:30:15.227501 2180315 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 16:30:15.227542 2180315 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 16:30:15.229269 2180315 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-806597 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 6 (263.693824ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 16:30:15.565748 2183878 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-806597" does not appear in /home/jenkins/minikube-integration/20109-2129584/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-806597" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (309.80s)
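A sketch of the retry that the log itself suggests for this K8S_KUBELET_NOT_RUNNING failure: the same first-start invocation with the proposed --extra-config=kubelet.cgroup-driver=systemd flag appended, followed by the kubelet checks kubeadm recommends, run over minikube ssh. The profile name and flags below are copied from the failing command above; whether the systemd cgroup driver actually resolves the issue on this guest image is an assumption.

	out/minikube-linux-amd64 start -p old-k8s-version-806597 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	out/minikube-linux-amd64 -p old-k8s-version-806597 ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 50"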

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (1600.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-429406 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-429406 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: signal: killed (26m38.069307452s)

                                                
                                                
-- stdout --
	* [embed-certs-429406] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-429406" primary control-plane node in "embed-certs-429406" cluster
	* Restarting existing kvm2 VM for "embed-certs-429406" ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-429406 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:29:25.747858 2183212 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:29:25.747970 2183212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:29:25.747982 2183212 out.go:358] Setting ErrFile to fd 2...
	I0120 16:29:25.747987 2183212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:29:25.748208 2183212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:29:25.748836 2183212 out.go:352] Setting JSON to false
	I0120 16:29:25.749986 2183212 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29512,"bootTime":1737361054,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:29:25.750093 2183212 start.go:139] virtualization: kvm guest
	I0120 16:29:25.753275 2183212 out.go:177] * [embed-certs-429406] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:29:25.754831 2183212 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:29:25.754839 2183212 notify.go:220] Checking for updates...
	I0120 16:29:25.756441 2183212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:29:25.758015 2183212 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:29:25.759578 2183212 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:29:25.761160 2183212 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:29:25.762530 2183212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:29:25.764270 2183212 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:29:25.764850 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:29:25.764907 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:29:25.781183 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I0120 16:29:25.781699 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:29:25.782310 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:29:25.782341 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:29:25.782712 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:29:25.782977 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:25.783310 2183212 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:29:25.783698 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:29:25.783744 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:29:25.799132 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0120 16:29:25.799534 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:29:25.800042 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:29:25.800066 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:29:25.800443 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:29:25.800623 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:25.837888 2183212 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 16:29:25.839149 2183212 start.go:297] selected driver: kvm2
	I0120 16:29:25.839171 2183212 start.go:901] validating driver "kvm2" against &{Name:embed-certs-429406 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-429406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:29:25.839290 2183212 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:29:25.839981 2183212 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:29:25.840088 2183212 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:29:25.856088 2183212 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:29:25.856545 2183212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:29:25.856589 2183212 cni.go:84] Creating CNI manager for ""
	I0120 16:29:25.856653 2183212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:29:25.856711 2183212 start.go:340] cluster config:
	{Name:embed-certs-429406 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-429406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:29:25.856837 2183212 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:29:25.858696 2183212 out.go:177] * Starting "embed-certs-429406" primary control-plane node in "embed-certs-429406" cluster
	I0120 16:29:25.859884 2183212 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:29:25.859943 2183212 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:29:25.859956 2183212 cache.go:56] Caching tarball of preloaded images
	I0120 16:29:25.860099 2183212 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:29:25.860115 2183212 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:29:25.860240 2183212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/config.json ...
	I0120 16:29:25.860463 2183212 start.go:360] acquireMachinesLock for embed-certs-429406: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:29:25.860524 2183212 start.go:364] duration metric: took 39.104µs to acquireMachinesLock for "embed-certs-429406"
	I0120 16:29:25.860547 2183212 start.go:96] Skipping create...Using existing machine configuration
	I0120 16:29:25.860555 2183212 fix.go:54] fixHost starting: 
	I0120 16:29:25.860821 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:29:25.860865 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:29:25.876254 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34689
	I0120 16:29:25.876818 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:29:25.877295 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:29:25.877321 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:29:25.877677 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:29:25.877913 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:25.878058 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetState
	I0120 16:29:25.879724 2183212 fix.go:112] recreateIfNeeded on embed-certs-429406: state=Stopped err=<nil>
	I0120 16:29:25.879750 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	W0120 16:29:25.879891 2183212 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 16:29:25.881926 2183212 out.go:177] * Restarting existing kvm2 VM for "embed-certs-429406" ...
	I0120 16:29:25.883112 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Start
	I0120 16:29:25.883324 2183212 main.go:141] libmachine: (embed-certs-429406) starting domain...
	I0120 16:29:25.883344 2183212 main.go:141] libmachine: (embed-certs-429406) ensuring networks are active...
	I0120 16:29:25.884158 2183212 main.go:141] libmachine: (embed-certs-429406) Ensuring network default is active
	I0120 16:29:25.884557 2183212 main.go:141] libmachine: (embed-certs-429406) Ensuring network mk-embed-certs-429406 is active
	I0120 16:29:25.884956 2183212 main.go:141] libmachine: (embed-certs-429406) getting domain XML...
	I0120 16:29:25.885833 2183212 main.go:141] libmachine: (embed-certs-429406) creating domain...
	I0120 16:29:27.125102 2183212 main.go:141] libmachine: (embed-certs-429406) waiting for IP...
	I0120 16:29:27.126002 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:27.126461 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:27.126587 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:27.126475 2183247 retry.go:31] will retry after 227.033404ms: waiting for domain to come up
	I0120 16:29:27.355231 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:27.355820 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:27.355873 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:27.355802 2183247 retry.go:31] will retry after 314.944309ms: waiting for domain to come up
	I0120 16:29:27.672341 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:27.672840 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:27.672874 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:27.672780 2183247 retry.go:31] will retry after 468.351946ms: waiting for domain to come up
	I0120 16:29:28.142414 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:28.142922 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:28.142951 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:28.142891 2183247 retry.go:31] will retry after 507.306144ms: waiting for domain to come up
	I0120 16:29:28.651706 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:28.652340 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:28.652372 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:28.652298 2183247 retry.go:31] will retry after 512.701146ms: waiting for domain to come up
	I0120 16:29:29.167328 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:29.167875 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:29.167905 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:29.167843 2183247 retry.go:31] will retry after 676.529322ms: waiting for domain to come up
	I0120 16:29:29.845836 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:29.846341 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:29.846405 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:29.846301 2183247 retry.go:31] will retry after 810.645296ms: waiting for domain to come up
	I0120 16:29:30.658527 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:30.659025 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:30.659069 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:30.658992 2183247 retry.go:31] will retry after 1.286293553s: waiting for domain to come up
	I0120 16:29:32.069729 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:32.070398 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:32.070430 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:32.070346 2183247 retry.go:31] will retry after 1.353437081s: waiting for domain to come up
	I0120 16:29:33.425906 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:33.426313 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:33.426344 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:33.426278 2183247 retry.go:31] will retry after 1.98596665s: waiting for domain to come up
	I0120 16:29:35.414674 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:35.415240 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:35.415268 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:35.415209 2183247 retry.go:31] will retry after 1.864902032s: waiting for domain to come up
	I0120 16:29:37.282109 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:37.282625 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:37.282687 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:37.282628 2183247 retry.go:31] will retry after 2.698098949s: waiting for domain to come up
	I0120 16:29:39.983756 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:39.984285 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | unable to find current IP address of domain embed-certs-429406 in network mk-embed-certs-429406
	I0120 16:29:39.984307 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | I0120 16:29:39.984248 2183247 retry.go:31] will retry after 3.112860598s: waiting for domain to come up
	I0120 16:29:43.100263 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.100689 2183212 main.go:141] libmachine: (embed-certs-429406) found domain IP: 192.168.61.123
	I0120 16:29:43.100709 2183212 main.go:141] libmachine: (embed-certs-429406) reserving static IP address...
	I0120 16:29:43.100722 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has current primary IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.101121 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "embed-certs-429406", mac: "52:54:00:90:f9:e0", ip: "192.168.61.123"} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.101146 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | skip adding static IP to network mk-embed-certs-429406 - found existing host DHCP lease matching {name: "embed-certs-429406", mac: "52:54:00:90:f9:e0", ip: "192.168.61.123"}
	I0120 16:29:43.101161 2183212 main.go:141] libmachine: (embed-certs-429406) reserved static IP address 192.168.61.123 for domain embed-certs-429406
	I0120 16:29:43.101176 2183212 main.go:141] libmachine: (embed-certs-429406) waiting for SSH...
	I0120 16:29:43.101188 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | Getting to WaitForSSH function...
	I0120 16:29:43.103167 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.103430 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.103475 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.103548 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | Using SSH client type: external
	I0120 16:29:43.103571 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa (-rw-------)
	I0120 16:29:43.103620 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:29:43.103643 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | About to run SSH command:
	I0120 16:29:43.103661 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | exit 0
	I0120 16:29:43.227030 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | SSH cmd err, output: <nil>: 
	I0120 16:29:43.227536 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetConfigRaw
	I0120 16:29:43.228213 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetIP
	I0120 16:29:43.231167 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.231592 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.231623 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.231862 2183212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/config.json ...
	I0120 16:29:43.232121 2183212 machine.go:93] provisionDockerMachine start ...
	I0120 16:29:43.232144 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:43.232404 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:43.235224 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.235668 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.235703 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.235907 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:43.236134 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.236298 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.236461 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:43.236658 2183212 main.go:141] libmachine: Using SSH client type: native
	I0120 16:29:43.236917 2183212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0120 16:29:43.236933 2183212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 16:29:43.343779 2183212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 16:29:43.343812 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetMachineName
	I0120 16:29:43.344077 2183212 buildroot.go:166] provisioning hostname "embed-certs-429406"
	I0120 16:29:43.344109 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetMachineName
	I0120 16:29:43.344355 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:43.347140 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.347534 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.347579 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.347632 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:43.347827 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.348043 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.348202 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:43.348409 2183212 main.go:141] libmachine: Using SSH client type: native
	I0120 16:29:43.348599 2183212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0120 16:29:43.348614 2183212 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-429406 && echo "embed-certs-429406" | sudo tee /etc/hostname
	I0120 16:29:43.472733 2183212 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-429406
	
	I0120 16:29:43.472771 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:43.475678 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.476025 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.476055 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.476283 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:43.476491 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.476666 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.476812 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:43.476997 2183212 main.go:141] libmachine: Using SSH client type: native
	I0120 16:29:43.477180 2183212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0120 16:29:43.477196 2183212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-429406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-429406/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-429406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:29:43.592027 2183212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:29:43.592061 2183212 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:29:43.592148 2183212 buildroot.go:174] setting up certificates
	I0120 16:29:43.592162 2183212 provision.go:84] configureAuth start
	I0120 16:29:43.592183 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetMachineName
	I0120 16:29:43.592453 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetIP
	I0120 16:29:43.595239 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.595544 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.595570 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.595687 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:43.598046 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.598556 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.598625 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.598716 2183212 provision.go:143] copyHostCerts
	I0120 16:29:43.598771 2183212 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:29:43.598789 2183212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:29:43.598858 2183212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:29:43.598960 2183212 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:29:43.598969 2183212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:29:43.598993 2183212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:29:43.599056 2183212 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:29:43.599063 2183212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:29:43.599084 2183212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:29:43.599132 2183212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.embed-certs-429406 san=[127.0.0.1 192.168.61.123 embed-certs-429406 localhost minikube]
	I0120 16:29:43.695038 2183212 provision.go:177] copyRemoteCerts
	I0120 16:29:43.695113 2183212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:29:43.695141 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:43.697658 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.698069 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.698095 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.698362 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:43.698550 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.698723 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:43.698861 2183212 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa Username:docker}
	I0120 16:29:43.781852 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:29:43.807378 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0120 16:29:43.832629 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:29:43.857184 2183212 provision.go:87] duration metric: took 264.996632ms to configureAuth
	I0120 16:29:43.857223 2183212 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:29:43.857491 2183212 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:29:43.857576 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:43.860449 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.860832 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:43.860875 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:43.861004 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:43.861208 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.861365 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:43.861516 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:43.861662 2183212 main.go:141] libmachine: Using SSH client type: native
	I0120 16:29:43.861941 2183212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0120 16:29:43.861968 2183212 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:29:44.092712 2183212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:29:44.092741 2183212 machine.go:96] duration metric: took 860.604506ms to provisionDockerMachine
	I0120 16:29:44.092757 2183212 start.go:293] postStartSetup for "embed-certs-429406" (driver="kvm2")
	I0120 16:29:44.092790 2183212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:29:44.092819 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:44.093180 2183212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:29:44.093212 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:44.640973 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.641335 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:44.641379 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.641515 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:44.641705 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:44.641833 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:44.641948 2183212 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa Username:docker}
	I0120 16:29:44.730233 2183212 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:29:44.734797 2183212 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:29:44.734826 2183212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:29:44.734893 2183212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:29:44.734989 2183212 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:29:44.735111 2183212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:29:44.745755 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:29:44.773185 2183212 start.go:296] duration metric: took 680.412935ms for postStartSetup
	I0120 16:29:44.773227 2183212 fix.go:56] duration metric: took 18.91267288s for fixHost
	I0120 16:29:44.773252 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:44.776205 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.776671 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:44.776710 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.776946 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:44.777158 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:44.777362 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:44.777533 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:44.777737 2183212 main.go:141] libmachine: Using SSH client type: native
	I0120 16:29:44.777978 2183212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0120 16:29:44.777996 2183212 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:29:44.883681 2183212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737390584.854007030
	
	I0120 16:29:44.883706 2183212 fix.go:216] guest clock: 1737390584.854007030
	I0120 16:29:44.883724 2183212 fix.go:229] Guest: 2025-01-20 16:29:44.85400703 +0000 UTC Remote: 2025-01-20 16:29:44.773231599 +0000 UTC m=+19.066897403 (delta=80.775431ms)
	I0120 16:29:44.883751 2183212 fix.go:200] guest clock delta is within tolerance: 80.775431ms
	I0120 16:29:44.883770 2183212 start.go:83] releasing machines lock for "embed-certs-429406", held for 19.023222567s
	I0120 16:29:44.883810 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:44.884093 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetIP
	I0120 16:29:44.887470 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.887864 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:44.887892 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.888087 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:44.888580 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:44.888753 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:29:44.888859 2183212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:29:44.888904 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:44.888975 2183212 ssh_runner.go:195] Run: cat /version.json
	I0120 16:29:44.888999 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:29:44.891748 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.892060 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.892161 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:44.892195 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.892354 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:44.892524 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:44.892541 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:44.892555 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:44.892677 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:29:44.892734 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:44.893012 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:29:44.893030 2183212 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa Username:docker}
	I0120 16:29:44.893159 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:29:44.893337 2183212 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa Username:docker}
	I0120 16:29:45.011550 2183212 ssh_runner.go:195] Run: systemctl --version
	I0120 16:29:45.018073 2183212 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:29:45.170523 2183212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:29:45.178144 2183212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:29:45.178214 2183212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:29:45.200495 2183212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:29:45.200535 2183212 start.go:495] detecting cgroup driver to use...
	I0120 16:29:45.200600 2183212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:29:45.221357 2183212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:29:45.235904 2183212 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:29:45.235986 2183212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:29:45.251298 2183212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:29:45.266086 2183212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:29:45.378519 2183212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:29:45.548708 2183212 docker.go:233] disabling docker service ...
	I0120 16:29:45.548767 2183212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:29:45.568278 2183212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:29:45.582790 2183212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:29:45.704999 2183212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:29:45.825743 2183212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:29:45.843607 2183212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:29:45.869153 2183212 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:29:45.869222 2183212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:29:45.885115 2183212 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:29:45.885249 2183212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:29:45.898326 2183212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:29:45.911479 2183212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:29:45.924585 2183212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:29:45.937268 2183212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:29:45.951051 2183212 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:29:45.970720 2183212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:29:45.983026 2183212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:29:45.994202 2183212 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:29:45.994304 2183212 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:29:46.009470 2183212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:29:46.020184 2183212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:29:46.149236 2183212 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:29:46.257178 2183212 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:29:46.257272 2183212 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:29:46.262825 2183212 start.go:563] Will wait 60s for crictl version
	I0120 16:29:46.262910 2183212 ssh_runner.go:195] Run: which crictl
	I0120 16:29:46.267494 2183212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:29:46.309170 2183212 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:29:46.309276 2183212 ssh_runner.go:195] Run: crio --version
	I0120 16:29:46.339999 2183212 ssh_runner.go:195] Run: crio --version
	I0120 16:29:46.372697 2183212 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:29:46.374030 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetIP
	I0120 16:29:46.377218 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:46.377621 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:29:46.377656 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:29:46.377883 2183212 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0120 16:29:46.382689 2183212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:29:46.396840 2183212 kubeadm.go:883] updating cluster {Name:embed-certs-429406 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-429406 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:29:46.397002 2183212 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:29:46.397069 2183212 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:29:46.447602 2183212 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:29:46.447694 2183212 ssh_runner.go:195] Run: which lz4
	I0120 16:29:46.452568 2183212 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:29:46.457986 2183212 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:29:46.458027 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:29:48.067378 2183212 crio.go:462] duration metric: took 1.614838614s to copy over tarball
	I0120 16:29:48.067484 2183212 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:29:50.305295 2183212 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.237771794s)
	I0120 16:29:50.305343 2183212 crio.go:469] duration metric: took 2.237924592s to extract the tarball
	I0120 16:29:50.305354 2183212 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:29:50.345495 2183212 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:29:50.392999 2183212 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:29:50.393037 2183212 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:29:50.393048 2183212 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.32.0 crio true true} ...
	I0120 16:29:50.393189 2183212 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-429406 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-429406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:29:50.393313 2183212 ssh_runner.go:195] Run: crio config
	I0120 16:29:50.448733 2183212 cni.go:84] Creating CNI manager for ""
	I0120 16:29:50.448756 2183212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:29:50.448766 2183212 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:29:50.448787 2183212 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-429406 NodeName:embed-certs-429406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:29:50.448917 2183212 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-429406"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:29:50.448985 2183212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:29:50.460237 2183212 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:29:50.460317 2183212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:29:50.471037 2183212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0120 16:29:50.490060 2183212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:29:50.509260 2183212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0120 16:29:50.529622 2183212 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0120 16:29:50.534465 2183212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:29:50.548212 2183212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:29:50.673064 2183212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:29:50.692800 2183212 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406 for IP: 192.168.61.123
	I0120 16:29:50.692830 2183212 certs.go:194] generating shared ca certs ...
	I0120 16:29:50.692854 2183212 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:29:50.693051 2183212 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:29:50.693110 2183212 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:29:50.693122 2183212 certs.go:256] generating profile certs ...
	I0120 16:29:50.693250 2183212 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/client.key
	I0120 16:29:50.693336 2183212 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/apiserver.key.b798af56
	I0120 16:29:50.693393 2183212 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/proxy-client.key
	I0120 16:29:50.693547 2183212 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:29:50.693588 2183212 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:29:50.693624 2183212 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:29:50.693677 2183212 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:29:50.693714 2183212 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:29:50.693744 2183212 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:29:50.693802 2183212 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:29:50.694720 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:29:50.724367 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:29:50.757088 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:29:50.797819 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:29:50.841098 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 16:29:50.880298 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 16:29:50.911197 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:29:50.938471 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/embed-certs-429406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:29:50.966458 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:29:51.000306 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:29:51.026552 2183212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:29:51.052693 2183212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:29:51.071032 2183212 ssh_runner.go:195] Run: openssl version
	I0120 16:29:51.077480 2183212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:29:51.088795 2183212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:29:51.093839 2183212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:29:51.093913 2183212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:29:51.100754 2183212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:29:51.112346 2183212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:29:51.123979 2183212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:29:51.129006 2183212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:29:51.129092 2183212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:29:51.135237 2183212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:29:51.146895 2183212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:29:51.158912 2183212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:29:51.164070 2183212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:29:51.164152 2183212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:29:51.170378 2183212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:29:51.181900 2183212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:29:51.187263 2183212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 16:29:51.194003 2183212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 16:29:51.200866 2183212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 16:29:51.207826 2183212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 16:29:51.214837 2183212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 16:29:51.221792 2183212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 16:29:51.231302 2183212 kubeadm.go:392] StartCluster: {Name:embed-certs-429406 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-429406 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:29:51.231451 2183212 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:29:51.231530 2183212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:29:51.277127 2183212 cri.go:89] found id: ""
	I0120 16:29:51.277218 2183212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:29:51.288141 2183212 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 16:29:51.288175 2183212 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 16:29:51.288239 2183212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 16:29:51.298849 2183212 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 16:29:51.299487 2183212 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-429406" does not appear in /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:29:51.299663 2183212 kubeconfig.go:62] /home/jenkins/minikube-integration/20109-2129584/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-429406" cluster setting kubeconfig missing "embed-certs-429406" context setting]
	I0120 16:29:51.299938 2183212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:29:51.325369 2183212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 16:29:51.336806 2183212 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I0120 16:29:51.336848 2183212 kubeadm.go:1160] stopping kube-system containers ...
	I0120 16:29:51.336865 2183212 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 16:29:51.336923 2183212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:29:51.382529 2183212 cri.go:89] found id: ""
	I0120 16:29:51.382634 2183212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 16:29:51.400908 2183212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:29:51.412003 2183212 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:29:51.412026 2183212 kubeadm.go:157] found existing configuration files:
	
	I0120 16:29:51.412090 2183212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:29:51.421772 2183212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:29:51.421848 2183212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:29:51.432149 2183212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:29:51.442737 2183212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:29:51.442820 2183212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:29:51.453230 2183212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:29:51.464027 2183212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:29:51.464104 2183212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:29:51.474272 2183212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:29:51.484965 2183212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:29:51.485037 2183212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:29:51.495273 2183212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:29:51.505520 2183212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:29:51.642753 2183212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:29:52.682622 2183212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.039810084s)
	I0120 16:29:52.682670 2183212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:29:52.906394 2183212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:29:52.982432 2183212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:29:53.063735 2183212 api_server.go:52] waiting for apiserver process to appear ...
	I0120 16:29:53.063837 2183212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:29:53.564606 2183212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:29:54.064053 2183212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:29:54.080878 2183212 api_server.go:72] duration metric: took 1.017142782s to wait for apiserver process to appear ...
	I0120 16:29:54.080917 2183212 api_server.go:88] waiting for apiserver healthz status ...
	I0120 16:29:54.080943 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:29:59.083577 2183212 api_server.go:269] stopped: https://192.168.61.123:8443/healthz: Get "https://192.168.61.123:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0120 16:29:59.083636 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:04.084546 2183212 api_server.go:269] stopped: https://192.168.61.123:8443/healthz: Get "https://192.168.61.123:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0120 16:30:04.084677 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:09.086752 2183212 api_server.go:269] stopped: https://192.168.61.123:8443/healthz: Get "https://192.168.61.123:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0120 16:30:09.086822 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:14.087856 2183212 api_server.go:269] stopped: https://192.168.61.123:8443/healthz: Get "https://192.168.61.123:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0120 16:30:14.087932 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:14.427276 2183212 api_server.go:269] stopped: https://192.168.61.123:8443/healthz: Get "https://192.168.61.123:8443/healthz": read tcp 192.168.61.1:37070->192.168.61.123:8443: read: connection reset by peer
	I0120 16:30:14.581808 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:14.582783 2183212 api_server.go:269] stopped: https://192.168.61.123:8443/healthz: Get "https://192.168.61.123:8443/healthz": dial tcp 192.168.61.123:8443: connect: connection refused
	I0120 16:30:15.081989 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:15.082972 2183212 api_server.go:269] stopped: https://192.168.61.123:8443/healthz: Get "https://192.168.61.123:8443/healthz": dial tcp 192.168.61.123:8443: connect: connection refused
	I0120 16:30:15.581716 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:15.582430 2183212 api_server.go:269] stopped: https://192.168.61.123:8443/healthz: Get "https://192.168.61.123:8443/healthz": dial tcp 192.168.61.123:8443: connect: connection refused
	I0120 16:30:16.081832 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:18.881747 2183212 api_server.go:279] https://192.168.61.123:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 16:30:18.881785 2183212 api_server.go:103] status: https://192.168.61.123:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 16:30:18.881804 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:18.900301 2183212 api_server.go:279] https://192.168.61.123:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 16:30:18.900335 2183212 api_server.go:103] status: https://192.168.61.123:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 16:30:19.081740 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:19.088162 2183212 api_server.go:279] https://192.168.61.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 16:30:19.088196 2183212 api_server.go:103] status: https://192.168.61.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 16:30:19.581875 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:19.588636 2183212 api_server.go:279] https://192.168.61.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 16:30:19.588667 2183212 api_server.go:103] status: https://192.168.61.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 16:30:20.081381 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:30:20.087534 2183212 api_server.go:279] https://192.168.61.123:8443/healthz returned 200:
	ok
	I0120 16:30:20.095643 2183212 api_server.go:141] control plane version: v1.32.0
	I0120 16:30:20.095679 2183212 api_server.go:131] duration metric: took 26.014753203s to wait for apiserver health ...
	I0120 16:30:20.095693 2183212 cni.go:84] Creating CNI manager for ""
	I0120 16:30:20.095702 2183212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:30:20.201781 2183212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 16:30:20.204240 2183212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 16:30:20.217131 2183212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 16:30:20.237108 2183212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 16:30:20.248729 2183212 system_pods.go:59] 8 kube-system pods found
	I0120 16:30:20.248779 2183212 system_pods.go:61] "coredns-668d6bf9bc-fhmws" [1f0f33f9-0111-4d5a-8ebc-a5f713eaf74f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 16:30:20.248787 2183212 system_pods.go:61] "etcd-embed-certs-429406" [870e02d4-af82-4462-9709-bd2232ff1d2c] Running
	I0120 16:30:20.248793 2183212 system_pods.go:61] "kube-apiserver-embed-certs-429406" [65e8b735-e4b2-4112-9158-738b7a2ade10] Running
	I0120 16:30:20.248798 2183212 system_pods.go:61] "kube-controller-manager-embed-certs-429406" [38ba2f4e-eb21-48d1-bb42-8bd10561c9ed] Running
	I0120 16:30:20.248801 2183212 system_pods.go:61] "kube-proxy-ccck5" [9693c753-f071-4704-be3e-927917a05de8] Running
	I0120 16:30:20.248804 2183212 system_pods.go:61] "kube-scheduler-embed-certs-429406" [fc47106a-dbff-4b38-a501-d15059c2b83b] Running
	I0120 16:30:20.248815 2183212 system_pods.go:61] "metrics-server-f79f97bbb-8zvm2" [6ebaa9c7-cf5e-4c07-a20f-00ecb1f5eaa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 16:30:20.248823 2183212 system_pods.go:61] "storage-provisioner" [90605d5e-07fa-486b-88f2-a53af1f4dd3e] Running
	I0120 16:30:20.248829 2183212 system_pods.go:74] duration metric: took 11.691776ms to wait for pod list to return data ...
	I0120 16:30:20.248840 2183212 node_conditions.go:102] verifying NodePressure condition ...
	I0120 16:30:20.253907 2183212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 16:30:20.253944 2183212 node_conditions.go:123] node cpu capacity is 2
	I0120 16:30:20.253961 2183212 node_conditions.go:105] duration metric: took 5.115932ms to run NodePressure ...
	I0120 16:30:20.253987 2183212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:30:20.526793 2183212 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 16:30:20.532725 2183212 retry.go:31] will retry after 203.092245ms: kubelet not initialised
	I0120 16:30:20.740428 2183212 retry.go:31] will retry after 539.559407ms: kubelet not initialised
	I0120 16:30:21.325835 2183212 kubeadm.go:739] kubelet initialised
	I0120 16:30:21.325873 2183212 kubeadm.go:740] duration metric: took 799.044092ms waiting for restarted kubelet to initialise ...
	I0120 16:30:21.325887 2183212 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:30:21.332044 2183212 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:23.381767 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:25.838567 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:27.842884 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:30.341043 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:32.487675 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:34.840304 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:37.341259 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:39.343298 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:41.841864 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:44.338797 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:46.340147 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:48.842541 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:51.340554 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:53.840752 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:55.344745 2183212 pod_ready.go:93] pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace has status "Ready":"True"
	I0120 16:30:55.344789 2183212 pod_ready.go:82] duration metric: took 34.012702948s for pod "coredns-668d6bf9bc-fhmws" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.344807 2183212 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.358031 2183212 pod_ready.go:93] pod "etcd-embed-certs-429406" in "kube-system" namespace has status "Ready":"True"
	I0120 16:30:55.358069 2183212 pod_ready.go:82] duration metric: took 13.252722ms for pod "etcd-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.358084 2183212 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.365301 2183212 pod_ready.go:93] pod "kube-apiserver-embed-certs-429406" in "kube-system" namespace has status "Ready":"True"
	I0120 16:30:55.365332 2183212 pod_ready.go:82] duration metric: took 7.238772ms for pod "kube-apiserver-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.365352 2183212 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.370781 2183212 pod_ready.go:93] pod "kube-controller-manager-embed-certs-429406" in "kube-system" namespace has status "Ready":"True"
	I0120 16:30:55.370810 2183212 pod_ready.go:82] duration metric: took 5.448499ms for pod "kube-controller-manager-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.370825 2183212 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ccck5" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.376603 2183212 pod_ready.go:93] pod "kube-proxy-ccck5" in "kube-system" namespace has status "Ready":"True"
	I0120 16:30:55.376633 2183212 pod_ready.go:82] duration metric: took 5.799436ms for pod "kube-proxy-ccck5" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.376647 2183212 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.736571 2183212 pod_ready.go:93] pod "kube-scheduler-embed-certs-429406" in "kube-system" namespace has status "Ready":"True"
	I0120 16:30:55.736604 2183212 pod_ready.go:82] duration metric: took 359.948369ms for pod "kube-scheduler-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:55.736616 2183212 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace to be "Ready" ...
	I0120 16:30:57.743963 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:30:59.745262 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:02.243921 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:04.244771 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:06.744103 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:09.245225 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:11.743577 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:14.243795 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:16.743442 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:19.243891 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:21.743421 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:23.744182 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:26.243346 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:28.744371 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:31.249473 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:33.745238 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:35.745864 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:38.243362 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:40.243963 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:42.743220 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:44.744413 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:46.744496 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:49.243696 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:51.243842 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:53.245572 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:55.744143 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:31:57.744380 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:00.243816 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:02.743576 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:04.744768 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:07.244205 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:09.744053 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:11.747659 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:14.244154 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:16.244716 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:18.744113 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:21.243778 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:23.742967 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:25.743108 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:27.743895 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:29.745144 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:32.293465 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:34.744274 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:37.245299 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:39.743686 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:41.745828 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:44.243793 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:46.243954 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:48.244602 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:50.244651 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:52.744094 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:55.243583 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:57.244789 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:32:59.742877 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:01.745543 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:04.243529 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:06.244156 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:08.743441 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:11.243551 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:13.743785 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:16.248971 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:18.744163 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:21.244516 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:23.742582 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:25.743086 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:28.242524 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:30.242661 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:32.245328 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:34.742973 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:36.743797 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:39.243156 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:41.243195 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:43.244349 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:45.743886 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:47.744703 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:50.244692 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:52.745727 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:55.243402 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:57.244477 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:33:59.743995 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:02.243480 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:04.243973 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:06.245206 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:08.743978 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:10.744027 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:13.243435 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:15.244071 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:17.248262 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:19.744298 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:22.242982 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:24.244152 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:26.743680 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:28.744588 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:31.246141 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:33.745036 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:36.243947 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:38.743245 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:40.744958 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:43.245570 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:45.742685 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:47.743818 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:49.744948 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:52.244693 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:54.742623 2183212 pod_ready.go:103] pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace has status "Ready":"False"
	I0120 16:34:55.736897 2183212 pod_ready.go:82] duration metric: took 4m0.000244734s for pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace to be "Ready" ...
	E0120 16:34:55.736946 2183212 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8zvm2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 16:34:55.736971 2183212 pod_ready.go:39] duration metric: took 4m34.411071287s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
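The repeated pod_ready.go:103 entries above are a readiness poll: the metrics-server pod is re-checked every couple of seconds until the 4m0s budget expires, at which point WaitExtra gives up. A minimal sketch of such a poll with client-go is below; it assumes a ready-made clientset and uses hypothetical names (package readiness, waitPodReady), and is illustrative rather than minikube's actual pod_ready.go implementation.

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named pod until its Ready condition is "True"
    // or the timeout expires, mirroring the pod_ready.go lines in the log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil // status "Ready":"True"
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting %v for pod %q in %q to be Ready", timeout, name, ns)
            }
            time.Sleep(2 * time.Second) // poll interval; the log shows checks every ~2.5s
        }
    }

In the scenario logged above, a call like waitPodReady(ctx, cs, "kube-system", "metrics-server-f79f97bbb-8zvm2", 4*time.Minute) would return the timeout error, matching the WaitExtra failure.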
	I0120 16:34:55.737004 2183212 kubeadm.go:597] duration metric: took 5m4.448822649s to restartPrimaryControlPlane
	W0120 16:34:55.737105 2183212 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 16:34:55.737136 2183212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 16:35:23.916545 2183212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.179377279s)
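The ssh_runner.go Run/Completed pairs log each remote command together with a duration metric once it finishes (28.179377279s for the kubeadm reset above). A rough local stand-in for that pattern, assuming os/exec instead of SSH and a hypothetical runCmd helper:

    package runner

    import (
        "context"
        "log"
        "os/exec"
        "time"
    )

    // runCmd runs a command and logs how long it took, similar to the
    // "Run:" / "Completed:" pairs with "duration metric" in the log above.
    func runCmd(ctx context.Context, name string, args ...string) error {
        start := time.Now()
        log.Printf("Run: %s %v", name, args)
        err := exec.CommandContext(ctx, name, args...).Run()
        log.Printf("Completed: %s %v: (%s)", name, args, time.Since(start))
        return err
    }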
	I0120 16:35:23.916635 2183212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:35:23.938022 2183212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:35:23.961144 2183212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:35:23.975078 2183212 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:35:23.975111 2183212 kubeadm.go:157] found existing configuration files:
	
	I0120 16:35:23.975176 2183212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:35:24.005333 2183212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:35:24.005413 2183212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:35:24.017907 2183212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:35:24.036884 2183212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:35:24.036959 2183212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:35:24.056253 2183212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:35:24.066855 2183212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:35:24.066927 2183212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:35:24.077759 2183212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:35:24.088799 2183212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:35:24.088871 2183212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
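kubeadm.go:163 checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not contain it; here all four greps exit with status 2 simply because the files are already gone after the reset. A simplified sketch of that cleanup, assuming direct file access rather than ssh_runner, with removeStaleKubeconfigs as a hypothetical helper:

    package cleanup

    import (
        "os"
        "strings"
    )

    // removeStaleKubeconfigs deletes any of the given kubeconfig files that
    // do not reference the expected control-plane endpoint. Missing files
    // are ignored, matching the "No such file or directory" cases above.
    func removeStaleKubeconfigs(endpoint string, paths []string) error {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                if os.IsNotExist(err) {
                    continue
                }
                return err
            }
            if !strings.Contains(string(data), endpoint) {
                if err := os.Remove(p); err != nil {
                    return err
                }
            }
        }
        return nil
    }

For the run above the endpoint would be https://control-plane.minikube.internal:8443 and the paths the four .conf files named in the grep commands.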
	I0120 16:35:24.100805 2183212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:35:24.267618 2183212 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:35:32.668821 2183212 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:35:32.668886 2183212 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:35:32.668946 2183212 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:35:32.669090 2183212 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:35:32.669255 2183212 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:35:32.669346 2183212 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:35:32.671148 2183212 out.go:235]   - Generating certificates and keys ...
	I0120 16:35:32.671243 2183212 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:35:32.671333 2183212 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:35:32.671402 2183212 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 16:35:32.671486 2183212 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 16:35:32.671586 2183212 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 16:35:32.671659 2183212 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 16:35:32.671715 2183212 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 16:35:32.671770 2183212 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 16:35:32.671829 2183212 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 16:35:32.671914 2183212 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 16:35:32.671985 2183212 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 16:35:32.672063 2183212 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:35:32.672163 2183212 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:35:32.672250 2183212 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:35:32.672327 2183212 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:35:32.672429 2183212 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:35:32.672514 2183212 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:35:32.672622 2183212 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:35:32.672714 2183212 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:35:32.674381 2183212 out.go:235]   - Booting up control plane ...
	I0120 16:35:32.674490 2183212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:35:32.674575 2183212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:35:32.674672 2183212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:35:32.674783 2183212 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:35:32.674867 2183212 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:35:32.674905 2183212 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:35:32.675032 2183212 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:35:32.675135 2183212 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:35:32.675229 2183212 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001694693s
	I0120 16:35:32.675329 2183212 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:35:32.675423 2183212 kubeadm.go:310] [api-check] The API server is healthy after 5.00890949s
	I0120 16:35:32.675576 2183212 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:35:32.675742 2183212 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:35:32.675823 2183212 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:35:32.676074 2183212 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-429406 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:35:32.676155 2183212 kubeadm.go:310] [bootstrap-token] Using token: docsfd.e4bhny6w4h5zu02g
	I0120 16:35:32.678057 2183212 out.go:235]   - Configuring RBAC rules ...
	I0120 16:35:32.678180 2183212 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:35:32.678251 2183212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:35:32.678395 2183212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:35:32.678524 2183212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:35:32.678669 2183212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:35:32.678753 2183212 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:35:32.678872 2183212 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:35:32.678913 2183212 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:35:32.678971 2183212 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:35:32.678980 2183212 kubeadm.go:310] 
	I0120 16:35:32.679027 2183212 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:35:32.679034 2183212 kubeadm.go:310] 
	I0120 16:35:32.679102 2183212 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:35:32.679106 2183212 kubeadm.go:310] 
	I0120 16:35:32.679125 2183212 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:35:32.679180 2183212 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:35:32.679227 2183212 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:35:32.679234 2183212 kubeadm.go:310] 
	I0120 16:35:32.679300 2183212 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:35:32.679308 2183212 kubeadm.go:310] 
	I0120 16:35:32.679353 2183212 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:35:32.679360 2183212 kubeadm.go:310] 
	I0120 16:35:32.679408 2183212 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:35:32.679496 2183212 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:35:32.679600 2183212 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:35:32.679614 2183212 kubeadm.go:310] 
	I0120 16:35:32.679744 2183212 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:35:32.679848 2183212 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:35:32.679855 2183212 kubeadm.go:310] 
	I0120 16:35:32.679919 2183212 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token docsfd.e4bhny6w4h5zu02g \
	I0120 16:35:32.680000 2183212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:35:32.680019 2183212 kubeadm.go:310] 	--control-plane 
	I0120 16:35:32.680024 2183212 kubeadm.go:310] 
	I0120 16:35:32.680094 2183212 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:35:32.680104 2183212 kubeadm.go:310] 
	I0120 16:35:32.680170 2183212 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token docsfd.e4bhny6w4h5zu02g \
	I0120 16:35:32.680269 2183212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:35:32.680281 2183212 cni.go:84] Creating CNI manager for ""
	I0120 16:35:32.680290 2183212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:35:32.681861 2183212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 16:35:32.683290 2183212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 16:35:32.694254 2183212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
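cni.go picks the bridge CNI for the kvm2 + crio combination and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact payload is not shown in the log, so the constant below is only a representative bridge configuration, and writeBridgeConflist is a hypothetical helper:

    package cni

    import (
        "os"
        "path/filepath"
    )

    // bridgeConflist is a representative bridge CNI config; it is a stand-in
    // for whatever minikube actually copies to 1-k8s.conflist.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    // writeBridgeConflist writes the config to e.g. /etc/cni/net.d/1-k8s.conflist.
    func writeBridgeConflist(path string) error {
        if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
            return err
        }
        return os.WriteFile(path, []byte(bridgeConflist), 0o644)
    }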
	I0120 16:35:32.713435 2183212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:35:32.713540 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:32.713563 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-429406 minikube.k8s.io/updated_at=2025_01_20T16_35_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=embed-certs-429406 minikube.k8s.io/primary=true
	I0120 16:35:33.023204 2183212 ops.go:34] apiserver oom_adj: -16
	I0120 16:35:33.023320 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:33.523415 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:34.023577 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:34.523833 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:35.023580 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:35.524440 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:36.023619 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:36.524380 2183212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:35:36.639715 2183212 kubeadm.go:1113] duration metric: took 3.926242068s to wait for elevateKubeSystemPrivileges
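elevateKubeSystemPrivileges above creates the minikube-rbac clusterrolebinding and then retries `kubectl get sa default` roughly every 500ms until the default service account exists (about 3.9s in this run). A generic sketch of that retry, assuming kubectl is on PATH and using waitDefaultServiceAccount as a hypothetical helper:

    package bootstrap

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitDefaultServiceAccount polls `kubectl get sa default` until it
    // succeeds or the timeout expires, like the repeated runs above.
    func waitDefaultServiceAccount(ctx context.Context, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account not created within %v", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }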
	I0120 16:35:36.639758 2183212 kubeadm.go:394] duration metric: took 5m45.408466465s to StartCluster
	I0120 16:35:36.639784 2183212 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:35:36.639883 2183212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:35:36.641470 2183212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:35:36.641747 2183212 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:35:36.641889 2183212 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:35:36.641975 2183212 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:35:36.642017 2183212 addons.go:69] Setting default-storageclass=true in profile "embed-certs-429406"
	I0120 16:35:36.642001 2183212 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-429406"
	I0120 16:35:36.642032 2183212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-429406"
	I0120 16:35:36.642046 2183212 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-429406"
	W0120 16:35:36.642057 2183212 addons.go:247] addon storage-provisioner should already be in state true
	I0120 16:35:36.642055 2183212 addons.go:69] Setting dashboard=true in profile "embed-certs-429406"
	I0120 16:35:36.642080 2183212 addons.go:238] Setting addon dashboard=true in "embed-certs-429406"
	I0120 16:35:36.642088 2183212 host.go:66] Checking if "embed-certs-429406" exists ...
	W0120 16:35:36.642092 2183212 addons.go:247] addon dashboard should already be in state true
	I0120 16:35:36.642104 2183212 addons.go:69] Setting metrics-server=true in profile "embed-certs-429406"
	I0120 16:35:36.642126 2183212 host.go:66] Checking if "embed-certs-429406" exists ...
	I0120 16:35:36.642130 2183212 addons.go:238] Setting addon metrics-server=true in "embed-certs-429406"
	W0120 16:35:36.642139 2183212 addons.go:247] addon metrics-server should already be in state true
	I0120 16:35:36.642171 2183212 host.go:66] Checking if "embed-certs-429406" exists ...
	I0120 16:35:36.642422 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.642441 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.642452 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.642473 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.642503 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.642517 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.642542 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.642690 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.643780 2183212 out.go:177] * Verifying Kubernetes components...
	I0120 16:35:36.645388 2183212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:35:36.658782 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0120 16:35:36.659066 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0120 16:35:36.659233 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.659543 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.659758 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.659785 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.660090 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.660114 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.660196 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.660472 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.660774 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.660829 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.661009 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0120 16:35:36.661084 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.661125 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.661424 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.661848 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38667
	I0120 16:35:36.661933 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.661947 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.662247 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.662573 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.662768 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.662788 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetState
	I0120 16:35:36.662826 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.663194 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.663788 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.663837 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.666388 2183212 addons.go:238] Setting addon default-storageclass=true in "embed-certs-429406"
	W0120 16:35:36.666408 2183212 addons.go:247] addon default-storageclass should already be in state true
	I0120 16:35:36.666440 2183212 host.go:66] Checking if "embed-certs-429406" exists ...
	I0120 16:35:36.666780 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.666823 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.678185 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43295
	I0120 16:35:36.678629 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.679207 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.679230 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.679634 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.679816 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetState
	I0120 16:35:36.680831 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0120 16:35:36.681085 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I0120 16:35:36.681331 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.681805 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:35:36.682434 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.682975 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0120 16:35:36.683182 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.683218 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.683300 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.683658 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.683714 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.683722 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.683897 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.683915 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.684113 2183212 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 16:35:36.684230 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.684320 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.684633 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetState
	I0120 16:35:36.684981 2183212 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:35:36.685037 2183212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:35:36.685326 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetState
	I0120 16:35:36.686734 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:35:36.687003 2183212 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 16:35:36.687322 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:35:36.688700 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 16:35:36.688712 2183212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:35:36.688761 2183212 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 16:35:36.688715 2183212 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 16:35:36.688851 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:35:36.690124 2183212 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:35:36.690139 2183212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:35:36.690154 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:35:36.690194 2183212 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 16:35:36.690221 2183212 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 16:35:36.690263 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:35:36.693121 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:35:36.693887 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:35:36.693919 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:35:36.694501 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:35:36.694758 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:35:36.694836 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:35:36.695092 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:35:36.695266 2183212 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa Username:docker}
	I0120 16:35:36.695606 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:35:36.695645 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:35:36.695669 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:35:36.695667 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:35:36.695786 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:35:36.695874 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:35:36.695890 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:35:36.695983 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:35:36.696136 2183212 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa Username:docker}
	I0120 16:35:36.696195 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:35:36.696369 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:35:36.696630 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:35:36.696793 2183212 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa Username:docker}
	I0120 16:35:36.703631 2183212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0120 16:35:36.704072 2183212 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:35:36.704603 2183212 main.go:141] libmachine: Using API Version  1
	I0120 16:35:36.704629 2183212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:35:36.704962 2183212 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:35:36.705156 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetState
	I0120 16:35:36.706504 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .DriverName
	I0120 16:35:36.706827 2183212 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:35:36.706845 2183212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:35:36.706865 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHHostname
	I0120 16:35:36.709564 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:35:36.709975 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f9:e0", ip: ""} in network mk-embed-certs-429406: {Iface:virbr4 ExpiryTime:2025-01-20 17:29:37 +0000 UTC Type:0 Mac:52:54:00:90:f9:e0 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:embed-certs-429406 Clientid:01:52:54:00:90:f9:e0}
	I0120 16:35:36.709997 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | domain embed-certs-429406 has defined IP address 192.168.61.123 and MAC address 52:54:00:90:f9:e0 in network mk-embed-certs-429406
	I0120 16:35:36.710163 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHPort
	I0120 16:35:36.710346 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHKeyPath
	I0120 16:35:36.710528 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .GetSSHUsername
	I0120 16:35:36.710732 2183212 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/embed-certs-429406/id_rsa Username:docker}
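The sshutil.go:53 lines record new SSH clients for 192.168.61.123:22 using the profile's id_rsa key and the docker user, one per addon-deployment goroutine. A rough sketch of opening such a connection with golang.org/x/crypto/ssh (not minikube's actual sshutil code; newClient is a hypothetical helper):

    package sshrun

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newClient opens a key-authenticated SSH connection like the
    // sshutil.go:53 entries above (addr would be "192.168.61.123:22").
    func newClient(addr, user, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        return ssh.Dial("tcp", addr, cfg)
    }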
	I0120 16:35:36.939785 2183212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:35:36.974974 2183212 node_ready.go:35] waiting up to 6m0s for node "embed-certs-429406" to be "Ready" ...
	I0120 16:35:37.013408 2183212 node_ready.go:49] node "embed-certs-429406" has status "Ready":"True"
	I0120 16:35:37.013435 2183212 node_ready.go:38] duration metric: took 38.400867ms for node "embed-certs-429406" to be "Ready" ...
	I0120 16:35:37.013446 2183212 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:35:37.034420 2183212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-j4r8v" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:37.124686 2183212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:35:37.183026 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 16:35:37.183063 2183212 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 16:35:37.258171 2183212 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 16:35:37.258210 2183212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 16:35:37.263923 2183212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:35:37.296139 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 16:35:37.296171 2183212 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 16:35:37.345266 2183212 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 16:35:37.345316 2183212 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 16:35:37.424140 2183212 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 16:35:37.424184 2183212 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 16:35:37.429176 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 16:35:37.429205 2183212 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 16:35:37.568984 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 16:35:37.569020 2183212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 16:35:37.588531 2183212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 16:35:37.720669 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 16:35:37.720703 2183212 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 16:35:37.752552 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:37.752583 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:37.752948 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | Closing plugin on server side
	I0120 16:35:37.752990 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:37.752998 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:37.753006 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:37.753017 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:37.753278 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:37.753299 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:37.773532 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:37.773563 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:37.773872 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:37.773902 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:37.820457 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 16:35:37.820491 2183212 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 16:35:37.883193 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 16:35:37.883226 2183212 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 16:35:37.956429 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 16:35:37.956466 2183212 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 16:35:38.015787 2183212 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 16:35:38.015851 2183212 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 16:35:38.087587 2183212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 16:35:38.429625 2183212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.165653352s)
	I0120 16:35:38.429703 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:38.429718 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:38.430122 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:38.430148 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:38.430158 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:38.430167 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:38.430445 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:38.430463 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:39.089165 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-j4r8v" in "kube-system" namespace has status "Ready":"False"
	I0120 16:35:39.307965 2183212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.719383451s)
	I0120 16:35:39.308040 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:39.308061 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:39.308531 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:39.308553 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:39.308569 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:39.308574 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | Closing plugin on server side
	I0120 16:35:39.308578 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:39.308919 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:39.308931 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:39.308943 2183212 addons.go:479] Verifying addon metrics-server=true in "embed-certs-429406"
	I0120 16:35:39.544072 2183212 pod_ready.go:93] pod "coredns-668d6bf9bc-j4r8v" in "kube-system" namespace has status "Ready":"True"
	I0120 16:35:39.544102 2183212 pod_ready.go:82] duration metric: took 2.509646222s for pod "coredns-668d6bf9bc-j4r8v" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:39.544113 2183212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-lv579" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:40.183595 2183212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.095952902s)
	I0120 16:35:40.183664 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:40.183677 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:40.184047 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:40.184075 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:40.184085 2183212 main.go:141] libmachine: Making call to close driver server
	I0120 16:35:40.184092 2183212 main.go:141] libmachine: (embed-certs-429406) Calling .Close
	I0120 16:35:40.184342 2183212 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:35:40.184384 2183212 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:35:40.184384 2183212 main.go:141] libmachine: (embed-certs-429406) DBG | Closing plugin on server side
	I0120 16:35:40.186925 2183212 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-429406 addons enable metrics-server
	
	I0120 16:35:40.188814 2183212 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 16:35:40.190745 2183212 addons.go:514] duration metric: took 3.548867807s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 16:35:41.553567 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-lv579" in "kube-system" namespace has status "Ready":"False"
	I0120 16:35:43.558937 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-lv579" in "kube-system" namespace has status "Ready":"False"
	I0120 16:35:46.091141 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-lv579" in "kube-system" namespace has status "Ready":"False"
	I0120 16:35:48.742866 2183212 pod_ready.go:103] pod "coredns-668d6bf9bc-lv579" in "kube-system" namespace has status "Ready":"False"
	I0120 16:35:49.551449 2183212 pod_ready.go:93] pod "coredns-668d6bf9bc-lv579" in "kube-system" namespace has status "Ready":"True"
	I0120 16:35:49.551490 2183212 pod_ready.go:82] duration metric: took 10.007367113s for pod "coredns-668d6bf9bc-lv579" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.551507 2183212 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.557954 2183212 pod_ready.go:93] pod "etcd-embed-certs-429406" in "kube-system" namespace has status "Ready":"True"
	I0120 16:35:49.557988 2183212 pod_ready.go:82] duration metric: took 6.46957ms for pod "etcd-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.558003 2183212 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.564267 2183212 pod_ready.go:93] pod "kube-apiserver-embed-certs-429406" in "kube-system" namespace has status "Ready":"True"
	I0120 16:35:49.564300 2183212 pod_ready.go:82] duration metric: took 6.285871ms for pod "kube-apiserver-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.564316 2183212 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.571066 2183212 pod_ready.go:93] pod "kube-controller-manager-embed-certs-429406" in "kube-system" namespace has status "Ready":"True"
	I0120 16:35:49.571099 2183212 pod_ready.go:82] duration metric: took 6.772088ms for pod "kube-controller-manager-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.571115 2183212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g8f8l" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.578393 2183212 pod_ready.go:93] pod "kube-proxy-g8f8l" in "kube-system" namespace has status "Ready":"True"
	I0120 16:35:49.578427 2183212 pod_ready.go:82] duration metric: took 7.301439ms for pod "kube-proxy-g8f8l" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.578442 2183212 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.951285 2183212 pod_ready.go:93] pod "kube-scheduler-embed-certs-429406" in "kube-system" namespace has status "Ready":"True"
	I0120 16:35:49.951322 2183212 pod_ready.go:82] duration metric: took 372.86915ms for pod "kube-scheduler-embed-certs-429406" in "kube-system" namespace to be "Ready" ...
	I0120 16:35:49.951336 2183212 pod_ready.go:39] duration metric: took 12.937877044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:35:49.951357 2183212 api_server.go:52] waiting for apiserver process to appear ...
	I0120 16:35:49.951432 2183212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:49.978187 2183212 api_server.go:72] duration metric: took 13.336391591s to wait for apiserver process to appear ...
	I0120 16:35:49.978221 2183212 api_server.go:88] waiting for apiserver healthz status ...
	I0120 16:35:49.978253 2183212 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8443/healthz ...
	I0120 16:35:49.985595 2183212 api_server.go:279] https://192.168.61.123:8443/healthz returned 200:
	ok
	I0120 16:35:49.987488 2183212 api_server.go:141] control plane version: v1.32.0
	I0120 16:35:49.987531 2183212 api_server.go:131] duration metric: took 9.299431ms to wait for apiserver health ...
	I0120 16:35:49.987545 2183212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 16:35:50.152115 2183212 system_pods.go:59] 9 kube-system pods found
	I0120 16:35:50.152156 2183212 system_pods.go:61] "coredns-668d6bf9bc-j4r8v" [7137ed9c-36c3-414a-9094-a94b2dcdba8f] Running
	I0120 16:35:50.152183 2183212 system_pods.go:61] "coredns-668d6bf9bc-lv579" [b1f10e1e-e962-4239-829f-1bdc6430465a] Running
	I0120 16:35:50.152202 2183212 system_pods.go:61] "etcd-embed-certs-429406" [4934b301-6b6a-4143-b648-a03348b299a0] Running
	I0120 16:35:50.152208 2183212 system_pods.go:61] "kube-apiserver-embed-certs-429406" [0091ce6b-9e9a-4ac5-995e-93151a30da26] Running
	I0120 16:35:50.152215 2183212 system_pods.go:61] "kube-controller-manager-embed-certs-429406" [edbea842-993d-433c-a3f4-861c4a46c05d] Running
	I0120 16:35:50.152221 2183212 system_pods.go:61] "kube-proxy-g8f8l" [8f4a7869-50d5-4d74-a00f-f78fe8d24122] Running
	I0120 16:35:50.152226 2183212 system_pods.go:61] "kube-scheduler-embed-certs-429406" [0113797a-4417-47ec-bbfe-88fb33747ac9] Running
	I0120 16:35:50.152236 2183212 system_pods.go:61] "metrics-server-f79f97bbb-qnvqf" [2082e56a-aa58-49b4-8a5b-6b3896224219] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 16:35:50.152244 2183212 system_pods.go:61] "storage-provisioner" [a15c77fe-2d7e-4543-bf3e-a142e56398b8] Running
	I0120 16:35:50.152258 2183212 system_pods.go:74] duration metric: took 164.704829ms to wait for pod list to return data ...
	I0120 16:35:50.152271 2183212 default_sa.go:34] waiting for default service account to be created ...
	I0120 16:35:50.351838 2183212 default_sa.go:45] found service account: "default"
	I0120 16:35:50.351879 2183212 default_sa.go:55] duration metric: took 199.596562ms for default service account to be created ...
	I0120 16:35:50.351893 2183212 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 16:35:50.551958 2183212 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-429406 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-429406 -n embed-certs-429406
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-429406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-429406 logs -n 25: (1.512128051s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo docker                        | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo find                          | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo crio                          | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p flannel-708138                                    | flannel-708138         | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	| delete  | -p old-k8s-version-806597                            | old-k8s-version-806597 | jenkins | v1.35.0 | 20 Jan 25 16:55 UTC | 20 Jan 25 16:55 UTC |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:42:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:42:32.008473 2197206 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:42:32.008621 2197206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:42:32.008635 2197206 out.go:358] Setting ErrFile to fd 2...
	I0120 16:42:32.008642 2197206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:42:32.008834 2197206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:42:32.009438 2197206 out.go:352] Setting JSON to false
	I0120 16:42:32.010574 2197206 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":30298,"bootTime":1737361054,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:42:32.010728 2197206 start.go:139] virtualization: kvm guest
	I0120 16:42:32.013230 2197206 out.go:177] * [bridge-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:42:32.014892 2197206 notify.go:220] Checking for updates...
	I0120 16:42:32.014906 2197206 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:42:32.016448 2197206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:42:32.017869 2197206 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:42:32.019315 2197206 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:32.020696 2197206 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:42:32.022005 2197206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:42:32.023905 2197206 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:32.024041 2197206 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:32.024168 2197206 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:42:32.024283 2197206 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:42:32.065664 2197206 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:42:32.067124 2197206 start.go:297] selected driver: kvm2
	I0120 16:42:32.067147 2197206 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:42:32.067160 2197206 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:42:32.067963 2197206 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:42:32.068068 2197206 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:42:32.087530 2197206 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:42:32.087602 2197206 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:42:32.087872 2197206 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:42:32.087908 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:42:32.087916 2197206 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:42:32.087987 2197206 start.go:340] cluster config:
	{Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:42:32.088138 2197206 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:42:32.090276 2197206 out.go:177] * Starting "bridge-708138" primary control-plane node in "bridge-708138" cluster
	I0120 16:42:30.420362 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:30.421027 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:30.421059 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:30.420994 2195575 retry.go:31] will retry after 3.907613054s: waiting for domain to come up
	I0120 16:42:32.091652 2197206 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:42:32.091722 2197206 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:42:32.091737 2197206 cache.go:56] Caching tarball of preloaded images
	I0120 16:42:32.091846 2197206 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:42:32.091859 2197206 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:42:32.091963 2197206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json ...
	I0120 16:42:32.091983 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json: {Name:mk67d90943d59835916cc1f1dddad0547daa252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:32.092126 2197206 start.go:360] acquireMachinesLock for bridge-708138: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:42:34.330849 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:34.331412 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:34.331455 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:34.331358 2195575 retry.go:31] will retry after 5.584556774s: waiting for domain to come up
	I0120 16:42:41.479851 2197206 start.go:364] duration metric: took 9.387696864s to acquireMachinesLock for "bridge-708138"
	I0120 16:42:41.479942 2197206 start.go:93] Provisioning new machine with config: &{Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:42:41.480071 2197206 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:42:41.482328 2197206 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 16:42:41.482654 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:42:41.482727 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:42:41.499933 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0120 16:42:41.500357 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:42:41.500878 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:42:41.500905 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:42:41.501247 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:42:41.501477 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:42:41.501622 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:42:41.501777 2197206 start.go:159] libmachine.API.Create for "bridge-708138" (driver="kvm2")
	I0120 16:42:41.501811 2197206 client.go:168] LocalClient.Create starting
	I0120 16:42:41.501865 2197206 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:42:41.501911 2197206 main.go:141] libmachine: Decoding PEM data...
	I0120 16:42:41.501942 2197206 main.go:141] libmachine: Parsing certificate...
	I0120 16:42:41.502018 2197206 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:42:41.502048 2197206 main.go:141] libmachine: Decoding PEM data...
	I0120 16:42:41.502079 2197206 main.go:141] libmachine: Parsing certificate...
	I0120 16:42:41.502119 2197206 main.go:141] libmachine: Running pre-create checks...
	I0120 16:42:41.502134 2197206 main.go:141] libmachine: (bridge-708138) Calling .PreCreateCheck
	I0120 16:42:41.502482 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:42:41.503075 2197206 main.go:141] libmachine: Creating machine...
	I0120 16:42:41.503098 2197206 main.go:141] libmachine: (bridge-708138) Calling .Create
	I0120 16:42:41.503237 2197206 main.go:141] libmachine: (bridge-708138) creating KVM machine...
	I0120 16:42:41.503270 2197206 main.go:141] libmachine: (bridge-708138) creating network...
	I0120 16:42:41.504580 2197206 main.go:141] libmachine: (bridge-708138) DBG | found existing default KVM network
	I0120 16:42:41.506204 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.505980 2197289 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:9b:e5} reservation:<nil>}
	I0120 16:42:41.507221 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.507124 2197289 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:0e:01} reservation:<nil>}
	I0120 16:42:41.508246 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.508159 2197289 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:2c:a8} reservation:<nil>}
	I0120 16:42:41.509727 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.509645 2197289 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a19c0}
	I0120 16:42:41.509792 2197206 main.go:141] libmachine: (bridge-708138) DBG | created network xml: 
	I0120 16:42:41.509817 2197206 main.go:141] libmachine: (bridge-708138) DBG | <network>
	I0120 16:42:41.509828 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <name>mk-bridge-708138</name>
	I0120 16:42:41.509848 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <dns enable='no'/>
	I0120 16:42:41.509881 2197206 main.go:141] libmachine: (bridge-708138) DBG |   
	I0120 16:42:41.509906 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0120 16:42:41.509920 2197206 main.go:141] libmachine: (bridge-708138) DBG |     <dhcp>
	I0120 16:42:41.509931 2197206 main.go:141] libmachine: (bridge-708138) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0120 16:42:41.509939 2197206 main.go:141] libmachine: (bridge-708138) DBG |     </dhcp>
	I0120 16:42:41.509943 2197206 main.go:141] libmachine: (bridge-708138) DBG |   </ip>
	I0120 16:42:41.509948 2197206 main.go:141] libmachine: (bridge-708138) DBG |   
	I0120 16:42:41.509953 2197206 main.go:141] libmachine: (bridge-708138) DBG | </network>
	I0120 16:42:41.509966 2197206 main.go:141] libmachine: (bridge-708138) DBG | 
	I0120 16:42:41.515816 2197206 main.go:141] libmachine: (bridge-708138) DBG | trying to create private KVM network mk-bridge-708138 192.168.72.0/24...
	I0120 16:42:41.591057 2197206 main.go:141] libmachine: (bridge-708138) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 ...
	I0120 16:42:41.591103 2197206 main.go:141] libmachine: (bridge-708138) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:42:41.591115 2197206 main.go:141] libmachine: (bridge-708138) DBG | private KVM network mk-bridge-708138 192.168.72.0/24 created
	I0120 16:42:41.591137 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.590985 2197289 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:41.591176 2197206 main.go:141] libmachine: (bridge-708138) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:42:41.878512 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.878362 2197289 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa...
	I0120 16:42:39.917690 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:39.918271 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has current primary IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:39.918301 2195552 main.go:141] libmachine: (flannel-708138) found domain IP: 192.168.39.206
	I0120 16:42:39.918314 2195552 main.go:141] libmachine: (flannel-708138) reserving static IP address...
	I0120 16:42:39.918709 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find host DHCP lease matching {name: "flannel-708138", mac: "52:54:00:ff:a2:3d", ip: "192.168.39.206"} in network mk-flannel-708138
	I0120 16:42:40.002772 2195552 main.go:141] libmachine: (flannel-708138) DBG | Getting to WaitForSSH function...
	I0120 16:42:40.002812 2195552 main.go:141] libmachine: (flannel-708138) reserved static IP address 192.168.39.206 for domain flannel-708138
	I0120 16:42:40.002826 2195552 main.go:141] libmachine: (flannel-708138) waiting for SSH...
	I0120 16:42:40.005462 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.005818 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.005841 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.006030 2195552 main.go:141] libmachine: (flannel-708138) DBG | Using SSH client type: external
	I0120 16:42:40.006070 2195552 main.go:141] libmachine: (flannel-708138) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa (-rw-------)
	I0120 16:42:40.006114 2195552 main.go:141] libmachine: (flannel-708138) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:42:40.006136 2195552 main.go:141] libmachine: (flannel-708138) DBG | About to run SSH command:
	I0120 16:42:40.006152 2195552 main.go:141] libmachine: (flannel-708138) DBG | exit 0
	I0120 16:42:40.135269 2195552 main.go:141] libmachine: (flannel-708138) DBG | SSH cmd err, output: <nil>: 
	I0120 16:42:40.135526 2195552 main.go:141] libmachine: (flannel-708138) KVM machine creation complete
	I0120 16:42:40.135876 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:40.136615 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:40.136828 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:40.137011 2195552 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:42:40.137029 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:42:40.138406 2195552 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:42:40.138423 2195552 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:42:40.138452 2195552 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:42:40.138464 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.140844 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.141163 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.141205 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.141321 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.141497 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.141697 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.141855 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.142022 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.142224 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.142236 2195552 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:42:40.250660 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:42:40.250692 2195552 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:42:40.250703 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.253520 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.253863 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.253919 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.254020 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.254263 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.254462 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.254593 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.254769 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.254954 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.254966 2195552 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:42:40.371879 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:42:40.371990 2195552 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:42:40.372011 2195552 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:42:40.372023 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.372291 2195552 buildroot.go:166] provisioning hostname "flannel-708138"
	I0120 16:42:40.372320 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.372554 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.375287 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.375686 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.375717 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.375925 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.376151 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.376353 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.376496 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.376659 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.376836 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.376848 2195552 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-708138 && echo "flannel-708138" | sudo tee /etc/hostname
	I0120 16:42:40.501787 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-708138
	
	I0120 16:42:40.501820 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.504836 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.505242 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.505267 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.505435 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.505652 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.505809 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.505915 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.506087 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.506277 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.506293 2195552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-708138' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-708138/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-708138' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:42:40.628479 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:42:40.628514 2195552 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:42:40.628580 2195552 buildroot.go:174] setting up certificates
	I0120 16:42:40.628599 2195552 provision.go:84] configureAuth start
	I0120 16:42:40.628618 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.628897 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:40.631696 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.632058 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.632103 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.632242 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.634596 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.634957 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.634983 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.635147 2195552 provision.go:143] copyHostCerts
	I0120 16:42:40.635203 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:42:40.635213 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:42:40.635282 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:42:40.635416 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:42:40.635427 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:42:40.635466 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:42:40.635533 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:42:40.635540 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:42:40.635560 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:42:40.635622 2195552 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.flannel-708138 san=[127.0.0.1 192.168.39.206 flannel-708138 localhost minikube]
	I0120 16:42:40.788476 2195552 provision.go:177] copyRemoteCerts
	I0120 16:42:40.788537 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:42:40.788565 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.791448 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.791862 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.791889 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.792091 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.792295 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.792425 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.792541 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:40.877555 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:42:40.904115 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0120 16:42:40.933842 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:42:40.962366 2195552 provision.go:87] duration metric: took 333.749236ms to configureAuth
	I0120 16:42:40.962401 2195552 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:42:40.962639 2195552 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:40.962740 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.965753 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.966102 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.966137 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.966346 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.966578 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.966794 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.966936 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.967135 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.967319 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.967333 2195552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:42:41.219615 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
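
The CRIO_MINIKUBE_OPTIONS drop-in above is written by running a shell pipeline over SSH against the guest. A minimal sketch of executing such a remote command with golang.org/x/crypto/ssh is shown below; the key path, user and address are taken from the log, but the code is illustrative and is not minikube's native SSH client.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// Illustrative sketch: run one remote command over SSH, the way the
	// "About to run SSH command" step above does.
	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", "192.168.39.206:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
		out, err := sess.CombinedOutput(cmd)
		fmt.Printf("err=%v output:\n%s\n", err, out)
	}
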
	
	I0120 16:42:41.219649 2195552 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:42:41.219660 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetURL
	I0120 16:42:41.220953 2195552 main.go:141] libmachine: (flannel-708138) DBG | using libvirt version 6000000
	I0120 16:42:41.223183 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.223607 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.223639 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.223729 2195552 main.go:141] libmachine: Docker is up and running!
	I0120 16:42:41.223743 2195552 main.go:141] libmachine: Reticulating splines...
	I0120 16:42:41.223752 2195552 client.go:171] duration metric: took 27.127384878s to LocalClient.Create
	I0120 16:42:41.223781 2195552 start.go:167] duration metric: took 27.127453023s to libmachine.API.Create "flannel-708138"
	I0120 16:42:41.223794 2195552 start.go:293] postStartSetup for "flannel-708138" (driver="kvm2")
	I0120 16:42:41.223803 2195552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:42:41.223831 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.224099 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:42:41.224137 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.226284 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.226568 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.226594 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.226810 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.226999 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.227158 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.227283 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.313516 2195552 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:42:41.318553 2195552 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:42:41.318588 2195552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:42:41.318691 2195552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:42:41.318822 2195552 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:42:41.318966 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:42:41.329039 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:42:41.359288 2195552 start.go:296] duration metric: took 135.474673ms for postStartSetup
	I0120 16:42:41.359376 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:41.360116 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:41.363418 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.363768 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.363797 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.364037 2195552 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/config.json ...
	I0120 16:42:41.364306 2195552 start.go:128] duration metric: took 27.289215285s to createHost
	I0120 16:42:41.364339 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.366928 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.367308 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.367345 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.367538 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.367729 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.367894 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.367999 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.368153 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:41.368324 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:41.368333 2195552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:42:41.479683 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737391361.443756218
	
	I0120 16:42:41.479715 2195552 fix.go:216] guest clock: 1737391361.443756218
	I0120 16:42:41.479725 2195552 fix.go:229] Guest: 2025-01-20 16:42:41.443756218 +0000 UTC Remote: 2025-01-20 16:42:41.364324183 +0000 UTC m=+27.417363622 (delta=79.432035ms)
	I0120 16:42:41.479753 2195552 fix.go:200] guest clock delta is within tolerance: 79.432035ms
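
The guest/host clock comparison above parses the guest's `date +%s.%N` reply and checks the difference against a tolerance. A tiny Go sketch of that check, using the two timestamps from the log; the 2s tolerance is an assumption, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"time"
	)

	// Illustrative only: compare the guest clock (parsed from `date +%s.%N`)
	// with the host clock and decide whether a resync would be needed.
	func main() {
		guest := time.Unix(1737391361, 443756218)                       // guest reply from the log
		host := time.Date(2025, 1, 20, 16, 42, 41, 364324183, time.UTC) // host time from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
	}
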
	I0120 16:42:41.479760 2195552 start.go:83] releasing machines lock for "flannel-708138", held for 27.404795771s
	I0120 16:42:41.479795 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.480084 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:41.483114 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.483496 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.483519 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.483702 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484306 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484533 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484636 2195552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:42:41.484681 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.484751 2195552 ssh_runner.go:195] Run: cat /version.json
	I0120 16:42:41.484776 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.487833 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.487927 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488372 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.488399 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488422 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.488436 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488512 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.488602 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.488694 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.488757 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.488853 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.488899 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.489003 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.489094 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.599954 2195552 ssh_runner.go:195] Run: systemctl --version
	I0120 16:42:41.607089 2195552 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:42:41.776515 2195552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:42:41.783949 2195552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:42:41.784065 2195552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:42:41.801321 2195552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:42:41.801352 2195552 start.go:495] detecting cgroup driver to use...
	I0120 16:42:41.801424 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:42:41.819201 2195552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:42:41.834731 2195552 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:42:41.834824 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:42:41.850093 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:42:41.865030 2195552 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:42:41.992116 2195552 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:42:42.163387 2195552 docker.go:233] disabling docker service ...
	I0120 16:42:42.163482 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:42:42.179064 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:42:42.194832 2195552 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:42:42.325738 2195552 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:42:42.463211 2195552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
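
The docker shutdown sequence above (stop, disable, mask, then verify) maps onto a handful of systemctl invocations. A small Go sketch that shells out the same way, purely for illustration; it requires sudo and is not minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Illustrative sketch of the "disabling docker service" sequence above:
	// stop, disable and mask the units, then check whether the service is
	// still active.
	func main() {
		steps := [][]string{
			{"systemctl", "stop", "-f", "docker.socket"},
			{"systemctl", "stop", "-f", "docker.service"},
			{"systemctl", "disable", "docker.socket"},
			{"systemctl", "mask", "docker.service"},
		}
		for _, args := range steps {
			out, err := exec.Command("sudo", args...).CombinedOutput()
			fmt.Printf("sudo %v -> err=%v %s", args, err, out)
		}
		// A zero exit status here means docker is still active.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "docker").Run()
		fmt.Println("docker still active:", err == nil)
	}
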
	I0120 16:42:42.478104 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:42:42.498097 2195552 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:42:42.498191 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.510081 2195552 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:42:42.510166 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.523170 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.535401 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.550805 2195552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:42:42.563405 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.575131 2195552 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.594402 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
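
The sequence of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, default_sysctls). The same kind of edit can be expressed as a small standalone Go program; the sketch below covers only the pause-image and cgroup-manager substitutions and is not minikube's code.

	package main

	import (
		"os"
		"regexp"
	)

	// Illustrative sketch of the in-place config edits performed by the sed
	// commands above. Requires write access to the CRI-O drop-in file.
	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Point CRI-O at the desired pause image and cgroup manager.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}
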
	I0120 16:42:42.606285 2195552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:42:42.616785 2195552 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:42:42.616863 2195552 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:42:42.631836 2195552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:42:42.643068 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:42:42.774308 2195552 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:42:42.883190 2195552 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:42:42.883286 2195552 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:42:42.889890 2195552 start.go:563] Will wait 60s for crictl version
	I0120 16:42:42.889963 2195552 ssh_runner.go:195] Run: which crictl
	I0120 16:42:42.895340 2195552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:42:42.953318 2195552 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:42:42.953426 2195552 ssh_runner.go:195] Run: crio --version
	I0120 16:42:42.988671 2195552 ssh_runner.go:195] Run: crio --version
	I0120 16:42:43.023504 2195552 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:42:43.024796 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:43.030238 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:43.030849 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:43.030886 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:43.031145 2195552 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:42:43.036477 2195552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
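
The /etc/hosts rewrite above drops any stale host.minikube.internal line and appends the current gateway mapping. A self-contained Go sketch of the same idempotent edit (requires root; illustrative only, not minikube's implementation):

	package main

	import (
		"os"
		"strings"
	)

	// Illustrative sketch: remove any existing host.minikube.internal entry
	// from /etc/hosts, then append the mapping used in the log above.
	func main() {
		const entry = "192.168.39.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
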
	I0120 16:42:43.051619 2195552 kubeadm.go:883] updating cluster {Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:42:43.051797 2195552 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:42:43.051875 2195552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:42:43.095932 2195552 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:42:43.096025 2195552 ssh_runner.go:195] Run: which lz4
	I0120 16:42:43.101037 2195552 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:42:43.106099 2195552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:42:43.106139 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:42:42.022498 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:42.022333 2197289 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/bridge-708138.rawdisk...
	I0120 16:42:42.022537 2197206 main.go:141] libmachine: (bridge-708138) DBG | Writing magic tar header
	I0120 16:42:42.022550 2197206 main.go:141] libmachine: (bridge-708138) DBG | Writing SSH key tar header
	I0120 16:42:42.022558 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:42.022472 2197289 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 ...
	I0120 16:42:42.022576 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138
	I0120 16:42:42.022676 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 (perms=drwx------)
	I0120 16:42:42.022704 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:42:42.022716 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:42:42.022728 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:42.022745 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:42:42.022762 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:42:42.022771 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins
	I0120 16:42:42.022780 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home
	I0120 16:42:42.022821 2197206 main.go:141] libmachine: (bridge-708138) DBG | skipping /home - not owner
	I0120 16:42:42.022845 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:42:42.022858 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:42:42.022869 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:42:42.022883 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:42:42.022898 2197206 main.go:141] libmachine: (bridge-708138) creating domain...
	I0120 16:42:42.024254 2197206 main.go:141] libmachine: (bridge-708138) define libvirt domain using xml: 
	I0120 16:42:42.024299 2197206 main.go:141] libmachine: (bridge-708138) <domain type='kvm'>
	I0120 16:42:42.024309 2197206 main.go:141] libmachine: (bridge-708138)   <name>bridge-708138</name>
	I0120 16:42:42.024317 2197206 main.go:141] libmachine: (bridge-708138)   <memory unit='MiB'>3072</memory>
	I0120 16:42:42.024329 2197206 main.go:141] libmachine: (bridge-708138)   <vcpu>2</vcpu>
	I0120 16:42:42.024341 2197206 main.go:141] libmachine: (bridge-708138)   <features>
	I0120 16:42:42.024352 2197206 main.go:141] libmachine: (bridge-708138)     <acpi/>
	I0120 16:42:42.024360 2197206 main.go:141] libmachine: (bridge-708138)     <apic/>
	I0120 16:42:42.024370 2197206 main.go:141] libmachine: (bridge-708138)     <pae/>
	I0120 16:42:42.024375 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024382 2197206 main.go:141] libmachine: (bridge-708138)   </features>
	I0120 16:42:42.024395 2197206 main.go:141] libmachine: (bridge-708138)   <cpu mode='host-passthrough'>
	I0120 16:42:42.024433 2197206 main.go:141] libmachine: (bridge-708138)   
	I0120 16:42:42.024460 2197206 main.go:141] libmachine: (bridge-708138)   </cpu>
	I0120 16:42:42.024482 2197206 main.go:141] libmachine: (bridge-708138)   <os>
	I0120 16:42:42.024498 2197206 main.go:141] libmachine: (bridge-708138)     <type>hvm</type>
	I0120 16:42:42.024508 2197206 main.go:141] libmachine: (bridge-708138)     <boot dev='cdrom'/>
	I0120 16:42:42.024514 2197206 main.go:141] libmachine: (bridge-708138)     <boot dev='hd'/>
	I0120 16:42:42.024522 2197206 main.go:141] libmachine: (bridge-708138)     <bootmenu enable='no'/>
	I0120 16:42:42.024526 2197206 main.go:141] libmachine: (bridge-708138)   </os>
	I0120 16:42:42.024533 2197206 main.go:141] libmachine: (bridge-708138)   <devices>
	I0120 16:42:42.024544 2197206 main.go:141] libmachine: (bridge-708138)     <disk type='file' device='cdrom'>
	I0120 16:42:42.024558 2197206 main.go:141] libmachine: (bridge-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/boot2docker.iso'/>
	I0120 16:42:42.024574 2197206 main.go:141] libmachine: (bridge-708138)       <target dev='hdc' bus='scsi'/>
	I0120 16:42:42.024583 2197206 main.go:141] libmachine: (bridge-708138)       <readonly/>
	I0120 16:42:42.024604 2197206 main.go:141] libmachine: (bridge-708138)     </disk>
	I0120 16:42:42.024617 2197206 main.go:141] libmachine: (bridge-708138)     <disk type='file' device='disk'>
	I0120 16:42:42.024629 2197206 main.go:141] libmachine: (bridge-708138)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:42:42.024646 2197206 main.go:141] libmachine: (bridge-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/bridge-708138.rawdisk'/>
	I0120 16:42:42.024661 2197206 main.go:141] libmachine: (bridge-708138)       <target dev='hda' bus='virtio'/>
	I0120 16:42:42.024672 2197206 main.go:141] libmachine: (bridge-708138)     </disk>
	I0120 16:42:42.024682 2197206 main.go:141] libmachine: (bridge-708138)     <interface type='network'>
	I0120 16:42:42.024691 2197206 main.go:141] libmachine: (bridge-708138)       <source network='mk-bridge-708138'/>
	I0120 16:42:42.024701 2197206 main.go:141] libmachine: (bridge-708138)       <model type='virtio'/>
	I0120 16:42:42.024709 2197206 main.go:141] libmachine: (bridge-708138)     </interface>
	I0120 16:42:42.024723 2197206 main.go:141] libmachine: (bridge-708138)     <interface type='network'>
	I0120 16:42:42.024747 2197206 main.go:141] libmachine: (bridge-708138)       <source network='default'/>
	I0120 16:42:42.024765 2197206 main.go:141] libmachine: (bridge-708138)       <model type='virtio'/>
	I0120 16:42:42.024776 2197206 main.go:141] libmachine: (bridge-708138)     </interface>
	I0120 16:42:42.024786 2197206 main.go:141] libmachine: (bridge-708138)     <serial type='pty'>
	I0120 16:42:42.024791 2197206 main.go:141] libmachine: (bridge-708138)       <target port='0'/>
	I0120 16:42:42.024796 2197206 main.go:141] libmachine: (bridge-708138)     </serial>
	I0120 16:42:42.024802 2197206 main.go:141] libmachine: (bridge-708138)     <console type='pty'>
	I0120 16:42:42.024807 2197206 main.go:141] libmachine: (bridge-708138)       <target type='serial' port='0'/>
	I0120 16:42:42.024814 2197206 main.go:141] libmachine: (bridge-708138)     </console>
	I0120 16:42:42.024823 2197206 main.go:141] libmachine: (bridge-708138)     <rng model='virtio'>
	I0120 16:42:42.024843 2197206 main.go:141] libmachine: (bridge-708138)       <backend model='random'>/dev/random</backend>
	I0120 16:42:42.024857 2197206 main.go:141] libmachine: (bridge-708138)     </rng>
	I0120 16:42:42.024871 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024886 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024898 2197206 main.go:141] libmachine: (bridge-708138)   </devices>
	I0120 16:42:42.024905 2197206 main.go:141] libmachine: (bridge-708138) </domain>
	I0120 16:42:42.024917 2197206 main.go:141] libmachine: (bridge-708138) 
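
The domain definition logged above is plain libvirt XML. For illustration, a few of its fields can be produced with encoding/xml as below; the struct models only the name, memory and vcpu elements and is not the schema minikube's KVM driver actually uses.

	package main

	import (
		"encoding/xml"
		"fmt"
	)

	// Minimal, illustrative model of a libvirt <domain> definition.
	type domain struct {
		XMLName xml.Name `xml:"domain"`
		Type    string   `xml:"type,attr"`
		Name    string   `xml:"name"`
		Memory  struct {
			Unit  string `xml:"unit,attr"`
			Value int    `xml:",chardata"`
		} `xml:"memory"`
		VCPU int `xml:"vcpu"`
	}

	func main() {
		d := domain{Type: "kvm", Name: "bridge-708138", VCPU: 2}
		d.Memory.Unit = "MiB"
		d.Memory.Value = 3072
		out, _ := xml.MarshalIndent(d, "", "  ")
		fmt.Println(string(out))
	}
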
	I0120 16:42:42.029557 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:92:a4:fd in network default
	I0120 16:42:42.030218 2197206 main.go:141] libmachine: (bridge-708138) starting domain...
	I0120 16:42:42.030248 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:42.030257 2197206 main.go:141] libmachine: (bridge-708138) ensuring networks are active...
	I0120 16:42:42.031044 2197206 main.go:141] libmachine: (bridge-708138) Ensuring network default is active
	I0120 16:42:42.031601 2197206 main.go:141] libmachine: (bridge-708138) Ensuring network mk-bridge-708138 is active
	I0120 16:42:42.032382 2197206 main.go:141] libmachine: (bridge-708138) getting domain XML...
	I0120 16:42:42.033582 2197206 main.go:141] libmachine: (bridge-708138) creating domain...
	I0120 16:42:43.399268 2197206 main.go:141] libmachine: (bridge-708138) waiting for IP...
	I0120 16:42:43.400313 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.400849 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.400943 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.400854 2197289 retry.go:31] will retry after 255.464218ms: waiting for domain to come up
	I0120 16:42:43.658464 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.659186 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.659219 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.659154 2197289 retry.go:31] will retry after 266.392686ms: waiting for domain to come up
	I0120 16:42:43.928079 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.928991 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.929026 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.928961 2197289 retry.go:31] will retry after 451.40279ms: waiting for domain to come up
	I0120 16:42:44.382040 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:44.382828 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:44.382874 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:44.382787 2197289 retry.go:31] will retry after 443.359812ms: waiting for domain to come up
	I0120 16:42:44.827744 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:44.828300 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:44.828402 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:44.828290 2197289 retry.go:31] will retry after 735.012761ms: waiting for domain to come up
	I0120 16:42:45.565132 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:45.565770 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:45.565798 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:45.565735 2197289 retry.go:31] will retry after 744.342493ms: waiting for domain to come up
	I0120 16:42:46.311596 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:46.312274 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:46.312307 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:46.312254 2197289 retry.go:31] will retry after 1.044734911s: waiting for domain to come up
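
The "waiting for IP" phase above polls the DHCP leases with a growing, slightly jittered delay (255ms, 266ms, 451ms, ...). Below is a generic Go sketch of such a retry loop; lookupIP is a hypothetical stand-in for the lease query and the timings only roughly mimic the log.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// Illustrative retry loop: poll lookupIP with a jittered, growing backoff
	// until it returns an address or the deadline passes.
	func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		backoff := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2 // grow roughly like the intervals in the log
		}
		return "", fmt.Errorf("timed out after %v waiting for IP", deadline)
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", fmt.Errorf("no lease yet")
			}
			return "192.168.39.100", nil // hypothetical address for the demo
		}, time.Minute)
		fmt.Println(ip, err)
	}
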
	I0120 16:42:44.760474 2195552 crio.go:462] duration metric: took 1.659486395s to copy over tarball
	I0120 16:42:44.760562 2195552 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:42:47.285354 2195552 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.524736784s)
	I0120 16:42:47.285446 2195552 crio.go:469] duration metric: took 2.524929922s to extract the tarball
	I0120 16:42:47.285471 2195552 ssh_runner.go:146] rm: /preloaded.tar.lz4
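
The preload step above copies the preloaded-images tarball to the guest and unpacks it into /var with tar -I lz4. A trivial Go wrapper around the same command, for illustration only (assumes tar and lz4 are on PATH and the tarball is already at /preloaded.tar.lz4):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Illustrative sketch: extract the lz4-compressed preload tarball exactly
	// as the logged command does.
	func main() {
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v output:\n%s\n", err, out)
	}
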
	I0120 16:42:47.324858 2195552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:42:47.372415 2195552 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:42:47.372446 2195552 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:42:47.372457 2195552 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.32.0 crio true true} ...
	I0120 16:42:47.372643 2195552 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-708138 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0120 16:42:47.372722 2195552 ssh_runner.go:195] Run: crio config
	I0120 16:42:47.422488 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:42:47.422519 2195552 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:42:47.422554 2195552 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-708138 NodeName:flannel-708138 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:42:47.422786 2195552 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-708138"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:42:47.422890 2195552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:42:47.433846 2195552 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:42:47.433938 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:42:47.444578 2195552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0120 16:42:47.461856 2195552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:42:47.478765 2195552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0120 16:42:47.495925 2195552 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0120 16:42:47.500231 2195552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:42:47.513503 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:42:47.646909 2195552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:42:47.666731 2195552 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138 for IP: 192.168.39.206
	I0120 16:42:47.666760 2195552 certs.go:194] generating shared ca certs ...
	I0120 16:42:47.666784 2195552 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.666988 2195552 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:42:47.667058 2195552 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:42:47.667071 2195552 certs.go:256] generating profile certs ...
	I0120 16:42:47.667161 2195552 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key
	I0120 16:42:47.667181 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt with IP's: []
	I0120 16:42:47.957732 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt ...
	I0120 16:42:47.957764 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt: {Name:mk2f64b37e464c896144cdc44cfc1fc4f548c045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.957936 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key ...
	I0120 16:42:47.957947 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key: {Name:mk1b16a48ea06faf15a739043d6a562a12842ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.958021 2195552 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76
	I0120 16:42:47.958037 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I0120 16:42:48.237739 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 ...
	I0120 16:42:48.237772 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76: {Name:mk2d82f1b438734a66d4bca5d26768f17a50dbb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.237934 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76 ...
	I0120 16:42:48.237945 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76: {Name:mk5552939933befe1ef0d3a7fff6d21fdf398d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.238016 2195552 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt
	I0120 16:42:48.238119 2195552 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key
	I0120 16:42:48.238183 2195552 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key
	I0120 16:42:48.238205 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt with IP's: []
	I0120 16:42:48.328536 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt ...
	I0120 16:42:48.328597 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt: {Name:mk71903f0dc1f4b5602bf3f87a72991a3294fe05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.328771 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key ...
	I0120 16:42:48.328786 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key: {Name:mkb6cb1df1b5d7b66259c1ec746be1ba174817a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.328986 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:42:48.329026 2195552 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:42:48.329038 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:42:48.329061 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:42:48.329085 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:42:48.329113 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:42:48.329155 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:42:48.329806 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:42:48.377022 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:42:48.423232 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:42:48.452106 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:42:48.484435 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:42:48.514707 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:42:48.541159 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:42:48.642490 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:42:48.668101 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:42:48.696379 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:42:48.722994 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:42:48.748145 2195552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:42:48.766358 2195552 ssh_runner.go:195] Run: openssl version
	I0120 16:42:48.773160 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:42:48.785416 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.791084 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.791158 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.797932 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:42:48.811525 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:42:48.826046 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.832200 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.832280 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.838879 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:42:48.851808 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:42:48.865253 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.870647 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.870724 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.877010 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
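
Each CA file above is linked as /etc/ssl/certs/<subject-hash>.0 so OpenSSL can find it by hash. A small Go sketch of that step, asking openssl for the hash and creating the symlink if it is missing (needs root; illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// Illustrative sketch: compute the certificate's subject hash with openssl
	// and create the /etc/ssl/certs/<hash>.0 symlink used for CA lookup.
	func main() {
		const cert = "/usr/share/ca-certificates/2136749.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(cert, link); err != nil {
				panic(err)
			}
		}
		fmt.Println("linked", link, "->", cert)
	}
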
	I0120 16:42:48.889902 2195552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:42:48.894559 2195552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:42:48.894640 2195552 kubeadm.go:392] StartCluster: {Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:42:48.894779 2195552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:42:48.894890 2195552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:42:48.940887 2195552 cri.go:89] found id: ""
	I0120 16:42:48.940984 2195552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:42:48.952531 2195552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:42:48.963786 2195552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:42:48.974250 2195552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:42:48.974278 2195552 kubeadm.go:157] found existing configuration files:
	
	I0120 16:42:48.974338 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:42:48.984449 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:42:48.984527 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:42:48.995330 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:42:49.006034 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:42:49.006104 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:42:49.017110 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:42:49.027295 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:42:49.027368 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:42:49.040812 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:42:49.051290 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:42:49.051377 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
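
The cleanup above is a simple best-effort pass: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and, when it is absent (or the file is missing), remove the file so kubeadm can regenerate it. A minimal Go sketch of the same idea follows; this is hypothetical illustration, not minikube's actual kubeadm.go logic, with the endpoint and paths taken from the log lines above.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleKubeconfigs removes any kubeconfig that does not already point
// at the expected control-plane endpoint, mirroring the grep/rm sequence in
// the log. Removal errors are ignored, like the best-effort "rm -f".
func cleanupStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // endpoint already present, keep the file
		}
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
		_ = os.Remove(p)
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
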
	I0120 16:42:49.066485 2195552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:42:49.134741 2195552 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:42:49.134946 2195552 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:42:49.249160 2195552 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:42:49.249323 2195552 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:42:49.249481 2195552 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:42:49.263796 2195552 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:42:47.358916 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:47.359566 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:47.359596 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:47.359554 2197289 retry.go:31] will retry after 1.461778861s: waiting for domain to come up
	I0120 16:42:48.823504 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:48.824115 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:48.824147 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:48.824084 2197289 retry.go:31] will retry after 1.249679155s: waiting for domain to come up
	I0120 16:42:50.075499 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:50.076082 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:50.076116 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:50.076030 2197289 retry.go:31] will retry after 2.28026185s: waiting for domain to come up
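
The interleaved 2197206 lines show the KVM driver polling libvirt for the bridge-708138 domain's IP, sleeping a growing interval between attempts. The sketch below is a self-contained Go backoff loop illustrating the general pattern behind the "will retry after ..." messages; it is an assumption about the idea, not the real retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls fn until it succeeds or timeout elapses, sleeping a
// jittered, growing interval between attempts and logging each delay.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := time.Second
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		wait += wait / 2 // let the delay grow, as in the log's increasing waits
	}
}

func main() {
	start := time.Now()
	err := retryUntil(10*time.Second, func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("waiting for domain to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
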
	I0120 16:42:49.298061 2195552 out.go:235]   - Generating certificates and keys ...
	I0120 16:42:49.298271 2195552 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:42:49.298360 2195552 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:42:49.326405 2195552 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:42:49.603739 2195552 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:42:50.017706 2195552 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:42:50.212861 2195552 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:42:50.332005 2195552 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:42:50.332365 2195552 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-708138 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0120 16:42:50.576915 2195552 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:42:50.577225 2195552 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-708138 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0120 16:42:50.922540 2195552 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:42:51.148072 2195552 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:42:51.262833 2195552 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:42:51.262930 2195552 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:42:51.404906 2195552 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:42:51.648067 2195552 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:42:51.759756 2195552 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:42:51.962741 2195552 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:42:52.453700 2195552 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:42:52.456041 2195552 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:42:52.459366 2195552 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:42:52.461278 2195552 out.go:235]   - Booting up control plane ...
	I0120 16:42:52.461391 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:42:52.461507 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:42:52.461588 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:42:52.484769 2195552 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:42:52.493367 2195552 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:42:52.493452 2195552 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:42:52.663075 2195552 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:42:52.664096 2195552 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:42:52.357734 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:52.358411 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:52.358493 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:52.358391 2197289 retry.go:31] will retry after 2.232137635s: waiting for domain to come up
	I0120 16:42:54.592598 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:54.593256 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:54.593288 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:54.593159 2197289 retry.go:31] will retry after 3.499879042s: waiting for domain to come up
	I0120 16:42:54.164599 2195552 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501261507s
	I0120 16:42:54.164721 2195552 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:42:59.162803 2195552 kubeadm.go:310] [api-check] The API server is healthy after 5.001059076s
	I0120 16:42:59.182087 2195552 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:42:59.202928 2195552 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:42:59.251598 2195552 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:42:59.251870 2195552 kubeadm.go:310] [mark-control-plane] Marking the node flannel-708138 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:42:59.267327 2195552 kubeadm.go:310] [bootstrap-token] Using token: 0uevl5.w9rl7hild7q3qmvj
	I0120 16:42:59.268924 2195552 out.go:235]   - Configuring RBAC rules ...
	I0120 16:42:59.269076 2195552 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:42:59.276545 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:42:59.290974 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:42:59.296882 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:42:59.304061 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:42:59.311324 2195552 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:42:59.571703 2195552 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:42:59.999391 2195552 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:43:00.569884 2195552 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:43:00.572667 2195552 kubeadm.go:310] 
	I0120 16:43:00.572758 2195552 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:43:00.572768 2195552 kubeadm.go:310] 
	I0120 16:43:00.572931 2195552 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:43:00.572966 2195552 kubeadm.go:310] 
	I0120 16:43:00.573016 2195552 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:43:00.573090 2195552 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:43:00.573154 2195552 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:43:00.573163 2195552 kubeadm.go:310] 
	I0120 16:43:00.573251 2195552 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:43:00.573265 2195552 kubeadm.go:310] 
	I0120 16:43:00.573345 2195552 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:43:00.573378 2195552 kubeadm.go:310] 
	I0120 16:43:00.573475 2195552 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:43:00.573604 2195552 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:43:00.573697 2195552 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:43:00.573707 2195552 kubeadm.go:310] 
	I0120 16:43:00.573823 2195552 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:43:00.573923 2195552 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:43:00.573930 2195552 kubeadm.go:310] 
	I0120 16:43:00.574048 2195552 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0uevl5.w9rl7hild7q3qmvj \
	I0120 16:43:00.574201 2195552 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:43:00.574235 2195552 kubeadm.go:310] 	--control-plane 
	I0120 16:43:00.574258 2195552 kubeadm.go:310] 
	I0120 16:43:00.574400 2195552 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:43:00.574432 2195552 kubeadm.go:310] 
	I0120 16:43:00.574590 2195552 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0uevl5.w9rl7hild7q3qmvj \
	I0120 16:43:00.574795 2195552 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:43:00.575007 2195552 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:43:00.575049 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:43:00.576721 2195552 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0120 16:42:58.094988 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:58.095844 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:58.095874 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:58.095719 2197289 retry.go:31] will retry after 4.384762232s: waiting for domain to come up
	I0120 16:43:00.577996 2195552 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0120 16:43:00.584504 2195552 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 16:43:00.584526 2195552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0120 16:43:00.610147 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 16:43:01.108354 2195552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:43:01.108472 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:01.108474 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-708138 minikube.k8s.io/updated_at=2025_01_20T16_43_01_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=flannel-708138 minikube.k8s.io/primary=true
	I0120 16:43:01.153107 2195552 ops.go:34] apiserver oom_adj: -16
	I0120 16:43:01.323188 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:01.823589 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:02.324096 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:02.823844 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:03.323872 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:03.823872 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:04.323604 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:04.428740 2195552 kubeadm.go:1113] duration metric: took 3.320348756s to wait for elevateKubeSystemPrivileges
	I0120 16:43:04.428788 2195552 kubeadm.go:394] duration metric: took 15.534153444s to StartCluster
	I0120 16:43:04.428816 2195552 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:04.428921 2195552 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:43:04.430989 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:04.431307 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 16:43:04.431303 2195552 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:43:04.431336 2195552 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:43:04.431519 2195552 addons.go:69] Setting storage-provisioner=true in profile "flannel-708138"
	I0120 16:43:04.431529 2195552 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:04.431538 2195552 addons.go:238] Setting addon storage-provisioner=true in "flannel-708138"
	I0120 16:43:04.431579 2195552 host.go:66] Checking if "flannel-708138" exists ...
	I0120 16:43:04.431586 2195552 addons.go:69] Setting default-storageclass=true in profile "flannel-708138"
	I0120 16:43:04.431621 2195552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-708138"
	I0120 16:43:04.432070 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.432112 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.432118 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.432151 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.435123 2195552 out.go:177] * Verifying Kubernetes components...
	I0120 16:43:04.436595 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:04.449431 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0120 16:43:04.449469 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0120 16:43:04.450031 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.450065 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.450628 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.450657 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.450772 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.450798 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.451074 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.451199 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.451435 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.451674 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.451723 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.455136 2195552 addons.go:238] Setting addon default-storageclass=true in "flannel-708138"
	I0120 16:43:04.455176 2195552 host.go:66] Checking if "flannel-708138" exists ...
	I0120 16:43:04.455442 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.455480 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.468668 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0120 16:43:04.469232 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.469794 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.469810 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.470234 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.470456 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.471939 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I0120 16:43:04.472364 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.472464 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:43:04.472904 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.472933 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.473322 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.473822 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.473860 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.474444 2195552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:43:04.475956 2195552 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:04.475976 2195552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:43:04.475998 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:43:04.479414 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.479895 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:43:04.479928 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.480056 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:43:04.480246 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:43:04.480426 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:43:04.480560 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:43:04.491228 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0120 16:43:04.491682 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.492333 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.492364 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.492740 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.492924 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.494696 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:43:04.494958 2195552 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:04.494975 2195552 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:43:04.494997 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:43:04.497642 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.498099 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:43:04.498131 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.498258 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:43:04.498486 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:43:04.498649 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:43:04.498811 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:43:04.741102 2195552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:04.741114 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 16:43:04.889912 2195552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:04.966678 2195552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:05.319499 2195552 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0120 16:43:05.321208 2195552 node_ready.go:35] waiting up to 15m0s for node "flannel-708138" to be "Ready" ...
	I0120 16:43:05.578109 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578136 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.578257 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578282 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.578512 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.578539 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.578550 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578558 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.580280 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580297 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580296 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580313 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.580323 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.580333 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.580340 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.580334 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580582 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580586 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580600 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.591009 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.591045 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.591353 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.591368 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.591377 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.593936 2195552 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 16:43:02.482109 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:02.482647 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:43:02.482679 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:43:02.482582 2197289 retry.go:31] will retry after 5.49113903s: waiting for domain to come up
	I0120 16:43:05.595175 2195552 addons.go:514] duration metric: took 1.163842267s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 16:43:05.824160 2195552 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-708138" context rescaled to 1 replicas
	I0120 16:43:07.325793 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:07.975570 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:07.976154 2197206 main.go:141] libmachine: (bridge-708138) found domain IP: 192.168.72.88
	I0120 16:43:07.976182 2197206 main.go:141] libmachine: (bridge-708138) reserving static IP address...
	I0120 16:43:07.976192 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has current primary IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:07.976560 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find host DHCP lease matching {name: "bridge-708138", mac: "52:54:00:d9:89:1c", ip: "192.168.72.88"} in network mk-bridge-708138
	I0120 16:43:08.062745 2197206 main.go:141] libmachine: (bridge-708138) reserved static IP address 192.168.72.88 for domain bridge-708138
	I0120 16:43:08.062784 2197206 main.go:141] libmachine: (bridge-708138) DBG | Getting to WaitForSSH function...
	I0120 16:43:08.062792 2197206 main.go:141] libmachine: (bridge-708138) waiting for SSH...
	I0120 16:43:08.065921 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.066430 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.066483 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.066582 2197206 main.go:141] libmachine: (bridge-708138) DBG | Using SSH client type: external
	I0120 16:43:08.066651 2197206 main.go:141] libmachine: (bridge-708138) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa (-rw-------)
	I0120 16:43:08.066681 2197206 main.go:141] libmachine: (bridge-708138) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:43:08.066697 2197206 main.go:141] libmachine: (bridge-708138) DBG | About to run SSH command:
	I0120 16:43:08.066706 2197206 main.go:141] libmachine: (bridge-708138) DBG | exit 0
	I0120 16:43:08.195445 2197206 main.go:141] libmachine: (bridge-708138) DBG | SSH cmd err, output: <nil>: 
	I0120 16:43:08.195759 2197206 main.go:141] libmachine: (bridge-708138) KVM machine creation complete
	I0120 16:43:08.196070 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:43:08.196739 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:08.197017 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:08.197188 2197206 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:43:08.197231 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:08.198995 2197206 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:43:08.199011 2197206 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:43:08.199017 2197206 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:43:08.199022 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.201755 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.202123 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.202152 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.202261 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.202473 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.202647 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.202790 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.202975 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.203249 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.203266 2197206 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:43:08.310341 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:43:08.310368 2197206 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:43:08.310376 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.313249 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.313593 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.313617 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.313753 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.313976 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.314162 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.314330 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.314548 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.314788 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.314803 2197206 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:43:08.424018 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:43:08.424146 2197206 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:43:08.424160 2197206 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:43:08.424174 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.424466 2197206 buildroot.go:166] provisioning hostname "bridge-708138"
	I0120 16:43:08.424517 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.424725 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.427305 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.427686 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.427715 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.427863 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.428207 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.428411 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.428534 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.428719 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.428965 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.428985 2197206 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-708138 && echo "bridge-708138" | sudo tee /etc/hostname
	I0120 16:43:08.551195 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-708138
	
	I0120 16:43:08.551238 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.554014 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.554390 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.554423 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.554574 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.554806 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.554968 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.555124 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.555257 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.555452 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.555467 2197206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-708138' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-708138/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-708138' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:43:08.673244 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
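
The remote shell fragment above keeps /etc/hosts idempotent: if the new hostname is already present it does nothing, otherwise it rewrites or appends the 127.0.1.1 entry. Below is an illustrative Go equivalent, assumed and simplified (it checks substring containment rather than the script's exact whole-line match).

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry makes sure the hosts file maps 127.0.1.1 to hostname,
// replacing an existing 127.0.1.1 line or appending one if none exists.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	replaced := false
	for i, line := range lines {
		if strings.Contains(line, hostname) {
			return nil // hostname already mapped, nothing to do
		}
		if strings.HasPrefix(line, "127.0.1.1") && !replaced {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "bridge-708138"); err != nil {
		fmt.Println("error:", err)
	}
}
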
	I0120 16:43:08.673286 2197206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:43:08.673324 2197206 buildroot.go:174] setting up certificates
	I0120 16:43:08.673340 2197206 provision.go:84] configureAuth start
	I0120 16:43:08.673357 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.673699 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:08.676632 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.676968 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.677000 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.677175 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.679290 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.679603 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.679632 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.679786 2197206 provision.go:143] copyHostCerts
	I0120 16:43:08.679847 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:43:08.679859 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:43:08.679915 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:43:08.680004 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:43:08.680019 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:43:08.680038 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:43:08.680087 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:43:08.680094 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:43:08.680113 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:43:08.680159 2197206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.bridge-708138 san=[127.0.0.1 192.168.72.88 bridge-708138 localhost minikube]
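
configureAuth issues a server certificate whose subject alternative names cover the VM's IPs and hostnames listed in the san=[...] field above. The standalone Go sketch below produces a certificate with those SANs; it self-signs to stay self-contained, whereas the real flow signs with the ca.pem/ca-key.pem CA referenced in the log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Generate a key pair for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// SANs taken from the san=[...] list in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-708138"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"bridge-708138", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.88")},
	}
	// Self-signed here; a real setup would pass the CA cert and key as parent/signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
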
	I0120 16:43:08.795436 2197206 provision.go:177] copyRemoteCerts
	I0120 16:43:08.795532 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:43:08.795567 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.798390 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.798751 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.798784 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.798951 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.799157 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.799316 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.799470 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:08.890925 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:43:08.918903 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 16:43:08.946784 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:43:08.972830 2197206 provision.go:87] duration metric: took 299.472419ms to configureAuth
	I0120 16:43:08.972860 2197206 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:43:08.973105 2197206 config.go:182] Loaded profile config "bridge-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:08.973209 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.976107 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.976516 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.976547 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.976758 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.977001 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.977195 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.977372 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.977552 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.977793 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.977818 2197206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:43:09.218079 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:43:09.218113 2197206 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:43:09.218121 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetURL
	I0120 16:43:09.219440 2197206 main.go:141] libmachine: (bridge-708138) DBG | using libvirt version 6000000
	I0120 16:43:09.221519 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.221903 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.221936 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.222152 2197206 main.go:141] libmachine: Docker is up and running!
	I0120 16:43:09.222170 2197206 main.go:141] libmachine: Reticulating splines...
	I0120 16:43:09.222180 2197206 client.go:171] duration metric: took 27.720355771s to LocalClient.Create
	I0120 16:43:09.222209 2197206 start.go:167] duration metric: took 27.720430833s to libmachine.API.Create "bridge-708138"
	I0120 16:43:09.222223 2197206 start.go:293] postStartSetup for "bridge-708138" (driver="kvm2")
	I0120 16:43:09.222236 2197206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:43:09.222269 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.222508 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:43:09.222546 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.224660 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.224997 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.225028 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.225135 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.225326 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.225514 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.225714 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.311781 2197206 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:43:09.316438 2197206 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:43:09.316477 2197206 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:43:09.316558 2197206 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:43:09.316649 2197206 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:43:09.316749 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:43:09.329422 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:43:09.358995 2197206 start.go:296] duration metric: took 136.756187ms for postStartSetup
	I0120 16:43:09.359076 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:43:09.359720 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:09.362855 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.363228 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.363298 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.363532 2197206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json ...
	I0120 16:43:09.363729 2197206 start.go:128] duration metric: took 27.883644045s to createHost
	I0120 16:43:09.363752 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.367222 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.367703 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.367728 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.367889 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.368112 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.368248 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.368376 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.368536 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:09.368750 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:09.368769 2197206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:43:09.476152 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737391389.460433936
	
	I0120 16:43:09.476186 2197206 fix.go:216] guest clock: 1737391389.460433936
	I0120 16:43:09.476208 2197206 fix.go:229] Guest: 2025-01-20 16:43:09.460433936 +0000 UTC Remote: 2025-01-20 16:43:09.363740668 +0000 UTC m=+37.396826539 (delta=96.693268ms)
	I0120 16:43:09.476239 2197206 fix.go:200] guest clock delta is within tolerance: 96.693268ms
	I0120 16:43:09.476250 2197206 start.go:83] releasing machines lock for "bridge-708138", held for 27.996351856s
	I0120 16:43:09.476280 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.476552 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:09.479629 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.480100 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.480130 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.480293 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.480785 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.480979 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.481115 2197206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:43:09.481163 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.481228 2197206 ssh_runner.go:195] Run: cat /version.json
	I0120 16:43:09.481255 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.484029 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484438 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.484465 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484487 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484809 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.484960 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.485013 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.485036 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.485249 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.485266 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.485476 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.485524 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.485634 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.485801 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.572916 2197206 ssh_runner.go:195] Run: systemctl --version
	I0120 16:43:09.609198 2197206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:43:09.772783 2197206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:43:09.779241 2197206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:43:09.779347 2197206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:43:09.796029 2197206 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:43:09.796066 2197206 start.go:495] detecting cgroup driver to use...
	I0120 16:43:09.796162 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:43:09.813742 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:43:09.828707 2197206 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:43:09.828775 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:43:09.843309 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:43:09.858188 2197206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:43:09.984031 2197206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:43:10.146631 2197206 docker.go:233] disabling docker service ...
	I0120 16:43:10.146719 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:43:10.162952 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:43:10.176639 2197206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:43:10.313460 2197206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:43:10.449221 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:43:10.464620 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:43:10.484192 2197206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:43:10.484261 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.496517 2197206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:43:10.496623 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.508222 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.519634 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.531216 2197206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:43:10.543258 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.557639 2197206 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.580753 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.592908 2197206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:43:10.604469 2197206 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:43:10.604557 2197206 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:43:10.619774 2197206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:43:10.630917 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:10.771445 2197206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:43:10.858491 2197206 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:43:10.858594 2197206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:43:10.863619 2197206 start.go:563] Will wait 60s for crictl version
	I0120 16:43:10.863674 2197206 ssh_runner.go:195] Run: which crictl
	I0120 16:43:10.867761 2197206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:43:10.910094 2197206 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:43:10.910202 2197206 ssh_runner.go:195] Run: crio --version
	I0120 16:43:10.946319 2197206 ssh_runner.go:195] Run: crio --version
	I0120 16:43:10.984785 2197206 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:43:10.986112 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:10.989054 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:10.989473 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:10.989499 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:10.989835 2197206 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 16:43:10.994705 2197206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:43:11.009975 2197206 kubeadm.go:883] updating cluster {Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:43:11.010149 2197206 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:43:11.010226 2197206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:43:11.045673 2197206 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:43:11.045764 2197206 ssh_runner.go:195] Run: which lz4
	I0120 16:43:11.050364 2197206 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:43:11.054940 2197206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:43:11.054978 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:43:09.824714 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:11.826450 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:12.645258 2197206 crio.go:462] duration metric: took 1.594939639s to copy over tarball
	I0120 16:43:12.645365 2197206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:43:15.071062 2197206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425659919s)
	I0120 16:43:15.071103 2197206 crio.go:469] duration metric: took 2.425799615s to extract the tarball
	I0120 16:43:15.071114 2197206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:43:15.111615 2197206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:43:15.156900 2197206 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:43:15.156926 2197206 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:43:15.156936 2197206 kubeadm.go:934] updating node { 192.168.72.88 8443 v1.32.0 crio true true} ...
	I0120 16:43:15.157067 2197206 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-708138 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0120 16:43:15.157162 2197206 ssh_runner.go:195] Run: crio config
	I0120 16:43:15.208647 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:43:15.208676 2197206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:43:15.208699 2197206 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.88 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-708138 NodeName:bridge-708138 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:43:15.208830 2197206 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-708138"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.88"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.88"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:43:15.208898 2197206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:43:15.220035 2197206 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:43:15.220130 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:43:15.230274 2197206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0120 16:43:15.250389 2197206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:43:15.268846 2197206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0120 16:43:15.288060 2197206 ssh_runner.go:195] Run: grep 192.168.72.88	control-plane.minikube.internal$ /etc/hosts
	I0120 16:43:15.293094 2197206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:43:15.307503 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:15.448214 2197206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:15.471118 2197206 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138 for IP: 192.168.72.88
	I0120 16:43:15.471147 2197206 certs.go:194] generating shared ca certs ...
	I0120 16:43:15.471165 2197206 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.471331 2197206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:43:15.471386 2197206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:43:15.471396 2197206 certs.go:256] generating profile certs ...
	I0120 16:43:15.471452 2197206 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key
	I0120 16:43:15.471479 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt with IP's: []
	I0120 16:43:15.891023 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt ...
	I0120 16:43:15.891061 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: {Name:mk81b32ec31af688b6d4652fb2789449b6bb041c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.891285 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key ...
	I0120 16:43:15.891309 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key: {Name:mk3bbf7430f7b04957959e169acea17d8973d267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.891454 2197206 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5
	I0120 16:43:15.891482 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.88]
	I0120 16:43:16.021148 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 ...
	I0120 16:43:16.021182 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5: {Name:mk56a312fc5ec12eb4e10626dc4fa18ded44019d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.021396 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5 ...
	I0120 16:43:16.021416 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5: {Name:mk71d4978edbd5634298d6328a82e57dfdcb21df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.021521 2197206 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt
	I0120 16:43:16.021621 2197206 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key
	I0120 16:43:16.021684 2197206 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key
	I0120 16:43:16.021701 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt with IP's: []
	I0120 16:43:16.200719 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt ...
	I0120 16:43:16.200752 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt: {Name:mk1b93fabdfdbe923ba4bd4bdcee8aa4ee4eb6eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.200944 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key ...
	I0120 16:43:16.200964 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key: {Name:mk47f0abf782077fe358b23835f1924f393006e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.201182 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:43:16.201225 2197206 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:43:16.201236 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:43:16.201260 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:43:16.201283 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:43:16.201303 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:43:16.201340 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:43:16.201918 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:43:16.237391 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:43:16.277743 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:43:16.306735 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:43:16.334792 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:43:16.363266 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:43:16.391982 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:43:16.419674 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:43:16.446802 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:43:16.474961 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:43:16.503997 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:43:16.530572 2197206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:43:16.548971 2197206 ssh_runner.go:195] Run: openssl version
	I0120 16:43:16.555413 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:43:16.567053 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.571897 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.571974 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.578136 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:43:16.590223 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:43:16.602984 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.607971 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.608083 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.614296 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:43:16.626015 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:43:16.639800 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.645006 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.645084 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.651449 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:43:16.663469 2197206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:43:16.668102 2197206 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:43:16.668167 2197206 kubeadm.go:392] StartCluster: {Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:43:16.668285 2197206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:43:16.668340 2197206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:43:16.706702 2197206 cri.go:89] found id: ""
	I0120 16:43:16.706804 2197206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:43:16.718586 2197206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:43:16.729343 2197206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:43:16.740887 2197206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:43:16.740911 2197206 kubeadm.go:157] found existing configuration files:
	
	I0120 16:43:16.740975 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:43:16.753083 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:43:16.753151 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:43:16.764580 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:43:16.776660 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:43:16.776739 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:43:16.787809 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:43:16.800110 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:43:16.800203 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:43:16.811124 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:43:16.822087 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:43:16.822160 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:43:16.834957 2197206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:43:16.902421 2197206 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:43:16.902553 2197206 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:43:17.042455 2197206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:43:17.042629 2197206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:43:17.042798 2197206 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:43:17.053323 2197206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:43:14.324786 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:16.325269 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:18.393718 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:17.321797 2197206 out.go:235]   - Generating certificates and keys ...
	I0120 16:43:17.321934 2197206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:43:17.322011 2197206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:43:17.402336 2197206 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:43:17.536347 2197206 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:43:17.688442 2197206 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:43:17.858918 2197206 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:43:18.183422 2197206 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:43:18.183672 2197206 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-708138 localhost] and IPs [192.168.72.88 127.0.0.1 ::1]
	I0120 16:43:18.264748 2197206 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:43:18.264953 2197206 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-708138 localhost] and IPs [192.168.72.88 127.0.0.1 ::1]
	I0120 16:43:18.426217 2197206 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:43:18.686494 2197206 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:43:18.828457 2197206 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:43:18.828691 2197206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:43:18.955301 2197206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:43:19.046031 2197206 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:43:19.231335 2197206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:43:19.447816 2197206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:43:19.619053 2197206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:43:19.619607 2197206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:43:19.622288 2197206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:43:19.624157 2197206 out.go:235]   - Booting up control plane ...
	I0120 16:43:19.624275 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:43:19.624380 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:43:19.624476 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:43:19.646471 2197206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:43:19.657842 2197206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:43:19.657931 2197206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:43:19.804616 2197206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:43:19.804743 2197206 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:43:20.315932 2197206 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.124273ms
	I0120 16:43:20.316084 2197206 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:43:20.825198 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:23.325444 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:25.818525 2197206 kubeadm.go:310] [api-check] The API server is healthy after 5.503297043s
	I0120 16:43:25.835132 2197206 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:43:25.869802 2197206 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:43:25.925988 2197206 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:43:25.926216 2197206 kubeadm.go:310] [mark-control-plane] Marking the node bridge-708138 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:43:25.952439 2197206 kubeadm.go:310] [bootstrap-token] Using token: xw20yr.9359ar4c28065art
	I0120 16:43:25.954040 2197206 out.go:235]   - Configuring RBAC rules ...
	I0120 16:43:25.954189 2197206 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:43:25.971234 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:43:25.984672 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:43:25.992321 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:43:25.998352 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:43:26.005011 2197206 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:43:26.224365 2197206 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:43:26.676446 2197206 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:43:27.225715 2197206 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:43:27.229867 2197206 kubeadm.go:310] 
	I0120 16:43:27.229970 2197206 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:43:27.229988 2197206 kubeadm.go:310] 
	I0120 16:43:27.230128 2197206 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:43:27.230149 2197206 kubeadm.go:310] 
	I0120 16:43:27.230187 2197206 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:43:27.230280 2197206 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:43:27.230366 2197206 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:43:27.230377 2197206 kubeadm.go:310] 
	I0120 16:43:27.230453 2197206 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:43:27.230469 2197206 kubeadm.go:310] 
	I0120 16:43:27.230530 2197206 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:43:27.230540 2197206 kubeadm.go:310] 
	I0120 16:43:27.230633 2197206 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:43:27.230741 2197206 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:43:27.230840 2197206 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:43:27.230850 2197206 kubeadm.go:310] 
	I0120 16:43:27.230947 2197206 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:43:27.231060 2197206 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:43:27.231069 2197206 kubeadm.go:310] 
	I0120 16:43:27.231168 2197206 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xw20yr.9359ar4c28065art \
	I0120 16:43:27.231293 2197206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:43:27.231325 2197206 kubeadm.go:310] 	--control-plane 
	I0120 16:43:27.231336 2197206 kubeadm.go:310] 
	I0120 16:43:27.231463 2197206 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:43:27.231479 2197206 kubeadm.go:310] 
	I0120 16:43:27.231554 2197206 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xw20yr.9359ar4c28065art \
	I0120 16:43:27.231702 2197206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:43:27.232406 2197206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:43:27.232502 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:43:27.235020 2197206 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 16:43:25.325819 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:27.325884 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:27.236381 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 16:43:27.251582 2197206 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 16:43:27.277986 2197206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:43:27.278066 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:27.278083 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-708138 minikube.k8s.io/updated_at=2025_01_20T16_43_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=bridge-708138 minikube.k8s.io/primary=true
	I0120 16:43:27.318132 2197206 ops.go:34] apiserver oom_adj: -16
	I0120 16:43:27.454138 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:27.955129 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:28.454750 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:28.954684 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:29.454513 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:29.955223 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:30.455022 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:30.954199 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:31.454428 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:31.606545 2197206 kubeadm.go:1113] duration metric: took 4.328571416s to wait for elevateKubeSystemPrivileges
	I0120 16:43:31.606592 2197206 kubeadm.go:394] duration metric: took 14.938431891s to StartCluster
	I0120 16:43:31.606633 2197206 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:31.606774 2197206 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:43:31.609525 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:31.609884 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 16:43:31.609885 2197206 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:43:31.609984 2197206 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:43:31.610121 2197206 addons.go:69] Setting storage-provisioner=true in profile "bridge-708138"
	I0120 16:43:31.610144 2197206 addons.go:238] Setting addon storage-provisioner=true in "bridge-708138"
	I0120 16:43:31.610141 2197206 addons.go:69] Setting default-storageclass=true in profile "bridge-708138"
	I0120 16:43:31.610154 2197206 config.go:182] Loaded profile config "bridge-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:31.610166 2197206 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-708138"
	I0120 16:43:31.610193 2197206 host.go:66] Checking if "bridge-708138" exists ...
	I0120 16:43:31.610720 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.610774 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.610788 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.610837 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.611842 2197206 out.go:177] * Verifying Kubernetes components...
	I0120 16:43:31.613454 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:31.628647 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45227
	I0120 16:43:31.628881 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0120 16:43:31.629232 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.629383 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.629930 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.629952 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.630016 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.630040 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.630423 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.630687 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.630689 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.631256 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.631304 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.634974 2197206 addons.go:238] Setting addon default-storageclass=true in "bridge-708138"
	I0120 16:43:31.635030 2197206 host.go:66] Checking if "bridge-708138" exists ...
	I0120 16:43:31.635335 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.635387 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.649021 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0120 16:43:31.649452 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.651254 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.651285 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.651867 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.652059 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.653726 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0120 16:43:31.654126 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.654296 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:31.654915 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.654928 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.655380 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.655949 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.656008 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.656646 2197206 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:43:31.658066 2197206 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:31.658082 2197206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:43:31.658099 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:31.661450 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.661729 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:31.661760 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.662030 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:31.662235 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:31.662397 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:31.662550 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:31.676457 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0120 16:43:31.677019 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.677756 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.677789 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.678148 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.678385 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.680320 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:31.680609 2197206 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:31.680630 2197206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:43:31.680655 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:31.683331 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.683716 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:31.683795 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.684017 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:31.684235 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:31.684397 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:31.684535 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:31.936634 2197206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:31.936728 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 16:43:31.976057 2197206 node_ready.go:35] waiting up to 15m0s for node "bridge-708138" to be "Ready" ...
	I0120 16:43:31.985329 2197206 node_ready.go:49] node "bridge-708138" has status "Ready":"True"
	I0120 16:43:31.985356 2197206 node_ready.go:38] duration metric: took 9.257739ms for node "bridge-708138" to be "Ready" ...
	I0120 16:43:31.985368 2197206 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:43:31.995641 2197206 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:32.055183 2197206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:32.153090 2197206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:32.568616 2197206 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0120 16:43:32.853746 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.853781 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.853900 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.853924 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854124 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.854175 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.854180 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.854222 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.854226 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.854268 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.854280 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854197 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.854356 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854138 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856214 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856226 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.856289 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.856306 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856355 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.856368 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.874144 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.874173 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.874543 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.874584 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.874595 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.876336 2197206 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 16:43:29.825256 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:31.826538 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:32.877697 2197206 addons.go:514] duration metric: took 1.267734381s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 16:43:33.076155 2197206 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-708138" context rescaled to 1 replicas
	I0120 16:43:33.998522 2197206 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-284cz" not found
	I0120 16:43:33.998557 2197206 pod_ready.go:82] duration metric: took 2.002870414s for pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace to be "Ready" ...
	E0120 16:43:33.998571 2197206 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-284cz" not found
	I0120 16:43:33.998581 2197206 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:36.006241 2197206 pod_ready.go:103] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"False"
	I0120 16:43:34.324997 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:36.326016 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:38.825101 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:38.504747 2197206 pod_ready.go:103] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"False"
	I0120 16:43:40.005785 2197206 pod_ready.go:93] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.005813 2197206 pod_ready.go:82] duration metric: took 6.007222936s for pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.005823 2197206 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.011217 2197206 pod_ready.go:93] pod "etcd-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.011239 2197206 pod_ready.go:82] duration metric: took 5.409716ms for pod "etcd-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.011248 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.016613 2197206 pod_ready.go:93] pod "kube-apiserver-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.016634 2197206 pod_ready.go:82] duration metric: took 5.379045ms for pod "kube-apiserver-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.016643 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.021777 2197206 pod_ready.go:93] pod "kube-controller-manager-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.021806 2197206 pod_ready.go:82] duration metric: took 5.155108ms for pod "kube-controller-manager-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.021818 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-gz7x6" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.028255 2197206 pod_ready.go:93] pod "kube-proxy-gz7x6" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.028280 2197206 pod_ready.go:82] duration metric: took 6.454274ms for pod "kube-proxy-gz7x6" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.028289 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.403358 2197206 pod_ready.go:93] pod "kube-scheduler-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.403389 2197206 pod_ready.go:82] duration metric: took 375.092058ms for pod "kube-scheduler-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.403398 2197206 pod_ready.go:39] duration metric: took 8.418019424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:43:40.403415 2197206 api_server.go:52] waiting for apiserver process to appear ...
	I0120 16:43:40.403470 2197206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:43:40.420906 2197206 api_server.go:72] duration metric: took 8.810975265s to wait for apiserver process to appear ...
	I0120 16:43:40.420936 2197206 api_server.go:88] waiting for apiserver healthz status ...
	I0120 16:43:40.420959 2197206 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0120 16:43:40.427501 2197206 api_server.go:279] https://192.168.72.88:8443/healthz returned 200:
	ok
	I0120 16:43:40.428593 2197206 api_server.go:141] control plane version: v1.32.0
	I0120 16:43:40.428625 2197206 api_server.go:131] duration metric: took 7.680154ms to wait for apiserver health ...
	I0120 16:43:40.428636 2197206 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 16:43:40.607673 2197206 system_pods.go:59] 7 kube-system pods found
	I0120 16:43:40.607711 2197206 system_pods.go:61] "coredns-668d6bf9bc-6ztbb" [39abe601-d0fa-4246-b8ab-6a9f4353c207] Running
	I0120 16:43:40.607716 2197206 system_pods.go:61] "etcd-bridge-708138" [73b4b429-aa20-47b8-bd96-bbe96a60b0a5] Running
	I0120 16:43:40.607719 2197206 system_pods.go:61] "kube-apiserver-bridge-708138" [bb3e6a95-e43a-4b98-a1bd-ea15b532e6d5] Running
	I0120 16:43:40.607723 2197206 system_pods.go:61] "kube-controller-manager-bridge-708138" [818c702e-fca4-491e-8677-6fe699c01561] Running
	I0120 16:43:40.607727 2197206 system_pods.go:61] "kube-proxy-gz7x6" [927ee7ed-4e8e-48de-b94c-c91208b52cca] Running
	I0120 16:43:40.607730 2197206 system_pods.go:61] "kube-scheduler-bridge-708138" [518ce086-80f8-4fb1-b1b2-faf5800915d5] Running
	I0120 16:43:40.607733 2197206 system_pods.go:61] "storage-provisioner" [7057ca4d-ad71-42c2-810a-9a33e8b409de] Running
	I0120 16:43:40.607740 2197206 system_pods.go:74] duration metric: took 179.093225ms to wait for pod list to return data ...
	I0120 16:43:40.607747 2197206 default_sa.go:34] waiting for default service account to be created ...
	I0120 16:43:40.803775 2197206 default_sa.go:45] found service account: "default"
	I0120 16:43:40.803805 2197206 default_sa.go:55] duration metric: took 196.051704ms for default service account to be created ...
	I0120 16:43:40.803813 2197206 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 16:43:41.006405 2197206 system_pods.go:87] 7 kube-system pods found
	I0120 16:43:41.203196 2197206 system_pods.go:105] "coredns-668d6bf9bc-6ztbb" [39abe601-d0fa-4246-b8ab-6a9f4353c207] Running
	I0120 16:43:41.203220 2197206 system_pods.go:105] "etcd-bridge-708138" [73b4b429-aa20-47b8-bd96-bbe96a60b0a5] Running
	I0120 16:43:41.203225 2197206 system_pods.go:105] "kube-apiserver-bridge-708138" [bb3e6a95-e43a-4b98-a1bd-ea15b532e6d5] Running
	I0120 16:43:41.203230 2197206 system_pods.go:105] "kube-controller-manager-bridge-708138" [818c702e-fca4-491e-8677-6fe699c01561] Running
	I0120 16:43:41.203234 2197206 system_pods.go:105] "kube-proxy-gz7x6" [927ee7ed-4e8e-48de-b94c-c91208b52cca] Running
	I0120 16:43:41.203238 2197206 system_pods.go:105] "kube-scheduler-bridge-708138" [518ce086-80f8-4fb1-b1b2-faf5800915d5] Running
	I0120 16:43:41.203243 2197206 system_pods.go:105] "storage-provisioner" [7057ca4d-ad71-42c2-810a-9a33e8b409de] Running
	I0120 16:43:41.203251 2197206 system_pods.go:147] duration metric: took 399.431194ms to wait for k8s-apps to be running ...
	I0120 16:43:41.203259 2197206 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 16:43:41.203319 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:43:41.218649 2197206 system_svc.go:56] duration metric: took 15.377778ms WaitForService to wait for kubelet
	I0120 16:43:41.218683 2197206 kubeadm.go:582] duration metric: took 9.608759794s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:43:41.218707 2197206 node_conditions.go:102] verifying NodePressure condition ...
	I0120 16:43:41.404150 2197206 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 16:43:41.404181 2197206 node_conditions.go:123] node cpu capacity is 2
	I0120 16:43:41.404194 2197206 node_conditions.go:105] duration metric: took 185.483174ms to run NodePressure ...
	I0120 16:43:41.404207 2197206 start.go:241] waiting for startup goroutines ...
	I0120 16:43:41.404213 2197206 start.go:246] waiting for cluster config update ...
	I0120 16:43:41.404225 2197206 start.go:255] writing updated cluster config ...
	I0120 16:43:41.404496 2197206 ssh_runner.go:195] Run: rm -f paused
	I0120 16:43:41.457290 2197206 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 16:43:41.459151 2197206 out.go:177] * Done! kubectl is now configured to use "bridge-708138" cluster and "default" namespace by default
	I0120 16:43:40.825164 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:43.325186 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:45.825830 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:48.325148 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:50.325324 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:52.825144 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:54.825386 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:57.325511 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:59.825432 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:01.826019 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:04.324951 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:06.327813 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:08.825548 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:10.825618 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:13.325998 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:15.824909 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:18.325253 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:20.325659 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:22.825615 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:25.324569 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:27.324668 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:29.325114 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:31.824591 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:33.825417 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:36.325425 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:38.326595 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:40.825370 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:43.325332 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:45.825470 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:48.325279 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:50.825752 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:53.326233 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:55.327674 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:57.824868 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:59.825796 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:02.325316 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:04.325859 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:06.825325 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:09.325718 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:11.825001 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:14.324938 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:16.325124 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:18.325501 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:20.825364 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:22.827208 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:25.325469 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:27.825982 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:30.325432 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:32.325551 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:34.825047 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:36.825526 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:39.325753 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:41.825898 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:44.325151 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:46.325219 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:48.325661 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:50.826115 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:53.325524 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:55.825672 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:57.825995 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:00.325672 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:02.824695 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:04.825548 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:07.325274 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:09.325798 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:11.824561 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:13.825167 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:15.825328 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:18.324814 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:20.824710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:22.825668 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:25.325111 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:27.824859 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:29.825200 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:32.328676 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:34.825710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:36.826122 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:39.324220 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:41.324710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:43.325287 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:45.325431 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:47.824648 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:49.825286 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:51.825539 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:53.825772 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:55.826486 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:58.324721 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:00.325134 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:02.825138 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:05.324759 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:05.324796 2195552 node_ready.go:38] duration metric: took 4m0.003559137s for node "flannel-708138" to be "Ready" ...
	I0120 16:47:05.327110 2195552 out.go:201] 
	W0120 16:47:05.328484 2195552 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0120 16:47:05.328509 2195552 out.go:270] * 
	W0120 16:47:05.329391 2195552 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:47:05.331128 2195552 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.499134888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392164499059956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c723891-86ae-494d-8728-e45d9c9a3141 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.500320087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=704e7ff4-b201-40b0-8bd7-175334aefc27 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.500375326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=704e7ff4-b201-40b0-8bd7-175334aefc27 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.500629194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9c4e9afdba4aa502294c17dbfd35334dd50f69c5b885081ad2785490cabe75f,PodSandboxId:a3d9650a9c0de89f3d26d0ebda7839031352f0a69fc93e36d3c707b6a773978a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737391894020664180,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-ct9fc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 54237406-002d-410e-a0a9-1881bfed567c,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd1c3d85fe37e2ed60cb00aa0f496d41aac79bcf48496a8525f24b1ca62b821,PodSandboxId:d77eb226aca1d821c73e6002c80c056d7870fb2367cf34186c1e7ac7cacabe91,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737390955409995865,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-dbl6c,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 23772131-e6c9-415d-9094-40daece3ca65,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41a80745e14f06233180d59108cb266aefd3f6139ca2a3f30ad558609407c843,PodSandboxId:45b7716543d8dbe5360ce6b60db18e7699feb4c03a5267b93d0d01073c8978f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737390939267731454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15c77fe-2d7e-4543-bf3e-a142e56398b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb56e00db0c1651584ee399432cc72128f9086a6127153aae1fdc58a2cfee604,PodSandboxId:3ba2bf03b3054b2c0d184e61497f8f8174caefec9b45494d8a9d74a58041d929,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737390938392960924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-j4r8v,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 7137ed9c-36c3-414a-9094-a94b2dcdba8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f45d36775002f945f9b8456462f3987abaf6baccb6bb63a656ae9d0c9eae135,PodSandboxId:073cd26103257be832a1407212a8d03e061e3b77d73b6257af9097b9521c9bc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737390938304918041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lv579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1f10e1e-e962-4239-829f-1bdc6430465a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88053dbaefe69bc9206f12d229ec50296fbcc1afff68b29b0df4fb28d0b30c59,PodSandboxId:9db8178f274b1d16b53310cfc918fdc333555147793c0a54428ecade1434148c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737390937022056732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8f8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4a7869-50d5-4d74-a00f-f78fe8d24122,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cadf22ac36f82e5a797d50957e8369589812caa476b90994963e935e738a7a7d,PodSandboxId:500b4cfcb6663683496322e4fbbd904a7b3be0ab215cc87489802affc7d094d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737390926525191732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab467047d1b57d4428704709bcddb2f7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea3fd1d65b4bcd5807b15b5a5c91aed1d7fdce2dbef517697fef3263b74295e,PodSandboxId:d756cab0f7662c8c513001ac8153b4fc1e5e5291f07964bfb76d1fb87c1ae13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737390926527652472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a9e448fb6779bcbf725d1208d2a616,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0281bdd0c238e081dc73927ae464a36674e076b794d7b851f3b2a1240d2942,PodSandboxId:5b76392863a1a1e48e83c3b60bc83dcc0ad67f1d739f470d6bb4facfed7cef0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737390926509145958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb962c4e66b7b24240175c82296fcde,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eff35b19ef8754b999f3b2776284917ba3fa0c5c28cccd2d61e7f9bcbcc782c,PodSandboxId:af436b92744de9e82d772f9ae45de7ccd84709bea36d565a08bfc7312c75d6b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737390926473616981,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c31340235aaeedb0926f7086afb2f18,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d114d69c857daf755f5a8acd085a88ddf9574fb3b0b305bb67b03adffb9c2e04,PodSandboxId:f5081a5ca18720d2c73fa127135f95b488d7dd62a4c5e8cf5315cf9008f89db1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737390615212769469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c31340235aaeedb0926f7086afb2f18,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=704e7ff4-b201-40b0-8bd7-175334aefc27 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.545553853Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5811c3b-4bae-4182-8418-c2a448fc2cf4 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.545629268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5811c3b-4bae-4182-8418-c2a448fc2cf4 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.547352838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1edb93f8-1683-4dd5-a62d-bd6f7d4bcf75 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.547784690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392164547762731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1edb93f8-1683-4dd5-a62d-bd6f7d4bcf75 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.548943095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de91d750-e0b3-4f5a-bc2f-6431c3637e26 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.549019003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de91d750-e0b3-4f5a-bc2f-6431c3637e26 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.549319581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9c4e9afdba4aa502294c17dbfd35334dd50f69c5b885081ad2785490cabe75f,PodSandboxId:a3d9650a9c0de89f3d26d0ebda7839031352f0a69fc93e36d3c707b6a773978a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737391894020664180,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-ct9fc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 54237406-002d-410e-a0a9-1881bfed567c,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd1c3d85fe37e2ed60cb00aa0f496d41aac79bcf48496a8525f24b1ca62b821,PodSandboxId:d77eb226aca1d821c73e6002c80c056d7870fb2367cf34186c1e7ac7cacabe91,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737390955409995865,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-dbl6c,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 23772131-e6c9-415d-9094-40daece3ca65,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41a80745e14f06233180d59108cb266aefd3f6139ca2a3f30ad558609407c843,PodSandboxId:45b7716543d8dbe5360ce6b60db18e7699feb4c03a5267b93d0d01073c8978f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737390939267731454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15c77fe-2d7e-4543-bf3e-a142e56398b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb56e00db0c1651584ee399432cc72128f9086a6127153aae1fdc58a2cfee604,PodSandboxId:3ba2bf03b3054b2c0d184e61497f8f8174caefec9b45494d8a9d74a58041d929,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737390938392960924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-j4r8v,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 7137ed9c-36c3-414a-9094-a94b2dcdba8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f45d36775002f945f9b8456462f3987abaf6baccb6bb63a656ae9d0c9eae135,PodSandboxId:073cd26103257be832a1407212a8d03e061e3b77d73b6257af9097b9521c9bc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737390938304918041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lv579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1f10e1e-e962-4239-829f-1bdc6430465a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88053dbaefe69bc9206f12d229ec50296fbcc1afff68b29b0df4fb28d0b30c59,PodSandboxId:9db8178f274b1d16b53310cfc918fdc333555147793c0a54428ecade1434148c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737390937022056732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8f8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4a7869-50d5-4d74-a00f-f78fe8d24122,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cadf22ac36f82e5a797d50957e8369589812caa476b90994963e935e738a7a7d,PodSandboxId:500b4cfcb6663683496322e4fbbd904a7b3be0ab215cc87489802affc7d094d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737390926525191732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab467047d1b57d4428704709bcddb2f7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea3fd1d65b4bcd5807b15b5a5c91aed1d7fdce2dbef517697fef3263b74295e,PodSandboxId:d756cab0f7662c8c513001ac8153b4fc1e5e5291f07964bfb76d1fb87c1ae13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737390926527652472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a9e448fb6779bcbf725d1208d2a616,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0281bdd0c238e081dc73927ae464a36674e076b794d7b851f3b2a1240d2942,PodSandboxId:5b76392863a1a1e48e83c3b60bc83dcc0ad67f1d739f470d6bb4facfed7cef0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737390926509145958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb962c4e66b7b24240175c82296fcde,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eff35b19ef8754b999f3b2776284917ba3fa0c5c28cccd2d61e7f9bcbcc782c,PodSandboxId:af436b92744de9e82d772f9ae45de7ccd84709bea36d565a08bfc7312c75d6b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737390926473616981,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c31340235aaeedb0926f7086afb2f18,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d114d69c857daf755f5a8acd085a88ddf9574fb3b0b305bb67b03adffb9c2e04,PodSandboxId:f5081a5ca18720d2c73fa127135f95b488d7dd62a4c5e8cf5315cf9008f89db1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737390615212769469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c31340235aaeedb0926f7086afb2f18,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de91d750-e0b3-4f5a-bc2f-6431c3637e26 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.585911500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5a1a96b-4485-4b33-b819-3186f2e1c6ab name=/runtime.v1.RuntimeService/Version
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.585983414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5a1a96b-4485-4b33-b819-3186f2e1c6ab name=/runtime.v1.RuntimeService/Version
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.587799814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3552c835-80bf-42be-abb5-397978522c8b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.588327016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392164588305535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3552c835-80bf-42be-abb5-397978522c8b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.588884279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b53400e-b2b5-4efd-8226-dc4023545f34 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.588937991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b53400e-b2b5-4efd-8226-dc4023545f34 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.589247636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9c4e9afdba4aa502294c17dbfd35334dd50f69c5b885081ad2785490cabe75f,PodSandboxId:a3d9650a9c0de89f3d26d0ebda7839031352f0a69fc93e36d3c707b6a773978a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737391894020664180,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-ct9fc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 54237406-002d-410e-a0a9-1881bfed567c,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd1c3d85fe37e2ed60cb00aa0f496d41aac79bcf48496a8525f24b1ca62b821,PodSandboxId:d77eb226aca1d821c73e6002c80c056d7870fb2367cf34186c1e7ac7cacabe91,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737390955409995865,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-dbl6c,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 23772131-e6c9-415d-9094-40daece3ca65,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41a80745e14f06233180d59108cb266aefd3f6139ca2a3f30ad558609407c843,PodSandboxId:45b7716543d8dbe5360ce6b60db18e7699feb4c03a5267b93d0d01073c8978f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737390939267731454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15c77fe-2d7e-4543-bf3e-a142e56398b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb56e00db0c1651584ee399432cc72128f9086a6127153aae1fdc58a2cfee604,PodSandboxId:3ba2bf03b3054b2c0d184e61497f8f8174caefec9b45494d8a9d74a58041d929,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737390938392960924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-j4r8v,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 7137ed9c-36c3-414a-9094-a94b2dcdba8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f45d36775002f945f9b8456462f3987abaf6baccb6bb63a656ae9d0c9eae135,PodSandboxId:073cd26103257be832a1407212a8d03e061e3b77d73b6257af9097b9521c9bc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737390938304918041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lv579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1f10e1e-e962-4239-829f-1bdc6430465a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88053dbaefe69bc9206f12d229ec50296fbcc1afff68b29b0df4fb28d0b30c59,PodSandboxId:9db8178f274b1d16b53310cfc918fdc333555147793c0a54428ecade1434148c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737390937022056732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8f8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4a7869-50d5-4d74-a00f-f78fe8d24122,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cadf22ac36f82e5a797d50957e8369589812caa476b90994963e935e738a7a7d,PodSandboxId:500b4cfcb6663683496322e4fbbd904a7b3be0ab215cc87489802affc7d094d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737390926525191732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab467047d1b57d4428704709bcddb2f7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea3fd1d65b4bcd5807b15b5a5c91aed1d7fdce2dbef517697fef3263b74295e,PodSandboxId:d756cab0f7662c8c513001ac8153b4fc1e5e5291f07964bfb76d1fb87c1ae13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737390926527652472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a9e448fb6779bcbf725d1208d2a616,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0281bdd0c238e081dc73927ae464a36674e076b794d7b851f3b2a1240d2942,PodSandboxId:5b76392863a1a1e48e83c3b60bc83dcc0ad67f1d739f470d6bb4facfed7cef0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737390926509145958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb962c4e66b7b24240175c82296fcde,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eff35b19ef8754b999f3b2776284917ba3fa0c5c28cccd2d61e7f9bcbcc782c,PodSandboxId:af436b92744de9e82d772f9ae45de7ccd84709bea36d565a08bfc7312c75d6b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737390926473616981,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c31340235aaeedb0926f7086afb2f18,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d114d69c857daf755f5a8acd085a88ddf9574fb3b0b305bb67b03adffb9c2e04,PodSandboxId:f5081a5ca18720d2c73fa127135f95b488d7dd62a4c5e8cf5315cf9008f89db1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737390615212769469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c31340235aaeedb0926f7086afb2f18,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b53400e-b2b5-4efd-8226-dc4023545f34 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.628263295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39913ac0-6a73-49a6-bf57-11077eef656b name=/runtime.v1.RuntimeService/Version
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.628384391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39913ac0-6a73-49a6-bf57-11077eef656b name=/runtime.v1.RuntimeService/Version
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.629590778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28f9bcea-4156-47dd-92f3-b71d8537223a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.630053566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392164630030631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28f9bcea-4156-47dd-92f3-b71d8537223a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.630676217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6795db39-a09c-490c-a2ef-31f5dd6b1c4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.630748694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6795db39-a09c-490c-a2ef-31f5dd6b1c4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:56:04 embed-certs-429406 crio[729]: time="2025-01-20 16:56:04.630976337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9c4e9afdba4aa502294c17dbfd35334dd50f69c5b885081ad2785490cabe75f,PodSandboxId:a3d9650a9c0de89f3d26d0ebda7839031352f0a69fc93e36d3c707b6a773978a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737391894020664180,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-ct9fc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 54237406-002d-410e-a0a9-1881bfed567c,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd1c3d85fe37e2ed60cb00aa0f496d41aac79bcf48496a8525f24b1ca62b821,PodSandboxId:d77eb226aca1d821c73e6002c80c056d7870fb2367cf34186c1e7ac7cacabe91,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737390955409995865,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-dbl6c,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 23772131-e6c9-415d-9094-40daece3ca65,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41a80745e14f06233180d59108cb266aefd3f6139ca2a3f30ad558609407c843,PodSandboxId:45b7716543d8dbe5360ce6b60db18e7699feb4c03a5267b93d0d01073c8978f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737390939267731454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15c77fe-2d7e-4543-bf3e-a142e56398b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb56e00db0c1651584ee399432cc72128f9086a6127153aae1fdc58a2cfee604,PodSandboxId:3ba2bf03b3054b2c0d184e61497f8f8174caefec9b45494d8a9d74a58041d929,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737390938392960924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-j4r8v,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 7137ed9c-36c3-414a-9094-a94b2dcdba8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f45d36775002f945f9b8456462f3987abaf6baccb6bb63a656ae9d0c9eae135,PodSandboxId:073cd26103257be832a1407212a8d03e061e3b77d73b6257af9097b9521c9bc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737390938304918041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lv579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1f10e1e-e962-4239-829f-1bdc6430465a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88053dbaefe69bc9206f12d229ec50296fbcc1afff68b29b0df4fb28d0b30c59,PodSandboxId:9db8178f274b1d16b53310cfc918fdc333555147793c0a54428ecade1434148c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737390937022056732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g8f8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4a7869-50d5-4d74-a00f-f78fe8d24122,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cadf22ac36f82e5a797d50957e8369589812caa476b90994963e935e738a7a7d,PodSandboxId:500b4cfcb6663683496322e4fbbd904a7b3be0ab215cc87489802affc7d094d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737390926525191732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab467047d1b57d4428704709bcddb2f7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea3fd1d65b4bcd5807b15b5a5c91aed1d7fdce2dbef517697fef3263b74295e,PodSandboxId:d756cab0f7662c8c513001ac8153b4fc1e5e5291f07964bfb76d1fb87c1ae13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737390926527652472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a9e448fb6779bcbf725d1208d2a616,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0281bdd0c238e081dc73927ae464a36674e076b794d7b851f3b2a1240d2942,PodSandboxId:5b76392863a1a1e48e83c3b60bc83dcc0ad67f1d739f470d6bb4facfed7cef0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737390926509145958,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb962c4e66b7b24240175c82296fcde,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eff35b19ef8754b999f3b2776284917ba3fa0c5c28cccd2d61e7f9bcbcc782c,PodSandboxId:af436b92744de9e82d772f9ae45de7ccd84709bea36d565a08bfc7312c75d6b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737390926473616981,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c31340235aaeedb0926f7086afb2f18,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d114d69c857daf755f5a8acd085a88ddf9574fb3b0b305bb67b03adffb9c2e04,PodSandboxId:f5081a5ca18720d2c73fa127135f95b488d7dd62a4c5e8cf5315cf9008f89db1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737390615212769469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-429406,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c31340235aaeedb0926f7086afb2f18,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6795db39-a09c-490c-a2ef-31f5dd6b1c4a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	d9c4e9afdba4a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   a3d9650a9c0de       dashboard-metrics-scraper-86c6bf9756-ct9fc
	5fd1c3d85fe37       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   d77eb226aca1d       kubernetes-dashboard-7779f9b69b-dbl6c
	41a80745e14f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 minutes ago      Running             storage-provisioner         0                   45b7716543d8d       storage-provisioner
	bb56e00db0c16       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   3ba2bf03b3054       coredns-668d6bf9bc-j4r8v
	9f45d36775002       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   073cd26103257       coredns-668d6bf9bc-lv579
	88053dbaefe69       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                           20 minutes ago      Running             kube-proxy                  0                   9db8178f274b1       kube-proxy-g8f8l
	3ea3fd1d65b4b       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                           20 minutes ago      Running             kube-scheduler              2                   d756cab0f7662       kube-scheduler-embed-certs-429406
	cadf22ac36f82       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           20 minutes ago      Running             etcd                        2                   500b4cfcb6663       etcd-embed-certs-429406
	1e0281bdd0c23       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                           20 minutes ago      Running             kube-controller-manager     3                   5b76392863a1a       kube-controller-manager-embed-certs-429406
	8eff35b19ef87       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           20 minutes ago      Running             kube-apiserver              3                   af436b92744de       kube-apiserver-embed-certs-429406
	d114d69c857da       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           25 minutes ago      Exited              kube-apiserver              2                   f5081a5ca1872       kube-apiserver-embed-certs-429406
	
	
	==> coredns [9f45d36775002f945f9b8456462f3987abaf6baccb6bb63a656ae9d0c9eae135] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [bb56e00db0c1651584ee399432cc72128f9086a6127153aae1fdc58a2cfee604] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-429406
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-429406
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
	                    minikube.k8s.io/name=embed-certs-429406
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T16_35_32_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 16:35:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-429406
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 16:55:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 16:51:12 +0000   Mon, 20 Jan 2025 16:35:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 16:51:12 +0000   Mon, 20 Jan 2025 16:35:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 16:51:12 +0000   Mon, 20 Jan 2025 16:35:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 16:51:12 +0000   Mon, 20 Jan 2025 16:35:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.123
	  Hostname:    embed-certs-429406
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 94bb2e68feec460fa821c32bc00c408e
	  System UUID:                94bb2e68-feec-460f-a821-c32bc00c408e
	  Boot ID:                    50cf57e6-f9ff-4ba7-9186-1706b88c9cd0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-j4r8v                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-668d6bf9bc-lv579                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-embed-certs-429406                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-embed-certs-429406             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-embed-certs-429406    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-g8f8l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-embed-certs-429406             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-f79f97bbb-qnvqf                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-ct9fc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-dbl6c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20m   kube-proxy       
	  Normal  Starting                 20m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m   kubelet          Node embed-certs-429406 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m   kubelet          Node embed-certs-429406 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m   kubelet          Node embed-certs-429406 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m   node-controller  Node embed-certs-429406 event: Registered Node embed-certs-429406 in Controller
	
	
	==> dmesg <==
	[  +2.888728] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.616593] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.228857] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.060292] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056906] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.213346] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.117388] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.323255] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +4.523162] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.062540] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.161077] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[Jan20 16:30] kauditd_printk_skb: 87 callbacks suppressed
	[ +12.968157] kauditd_printk_skb: 10 callbacks suppressed
	[ +32.025338] kauditd_printk_skb: 88 callbacks suppressed
	[Jan20 16:35] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.838837] systemd-fstab-generator[2789]: Ignoring "noauto" option for root device
	[  +0.063716] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.503945] systemd-fstab-generator[3129]: Ignoring "noauto" option for root device
	[  +0.082837] kauditd_printk_skb: 55 callbacks suppressed
	[  +4.945131] systemd-fstab-generator[3245]: Ignoring "noauto" option for root device
	[  +0.177556] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.084143] kauditd_printk_skb: 110 callbacks suppressed
	[  +6.484087] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [cadf22ac36f82e5a797d50957e8369589812caa476b90994963e935e738a7a7d] <==
	{"level":"info","ts":"2025-01-20T16:40:51.172514Z","caller":"traceutil/trace.go:171","msg":"trace[639520130] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:851; }","duration":"154.978085ms","start":"2025-01-20T16:40:51.017522Z","end":"2025-01-20T16:40:51.172500Z","steps":["trace[639520130] 'agreement among raft nodes before linearized reading'  (duration: 154.820868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T16:40:51.173755Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.872927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-ct9fc.181c73dd38c0e107\" limit:1 ","response":"range_response_count:1 size:1004"}
	{"level":"info","ts":"2025-01-20T16:40:51.173862Z","caller":"traceutil/trace.go:171","msg":"trace[1797888635] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-ct9fc.181c73dd38c0e107; range_end:; response_count:1; response_revision:851; }","duration":"164.156917ms","start":"2025-01-20T16:40:51.009690Z","end":"2025-01-20T16:40:51.173847Z","steps":["trace[1797888635] 'agreement among raft nodes before linearized reading'  (duration: 162.460144ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T16:40:54.388970Z","caller":"traceutil/trace.go:171","msg":"trace[1970256480] transaction","detail":"{read_only:false; response_revision:856; number_of_response:1; }","duration":"120.682683ms","start":"2025-01-20T16:40:54.268268Z","end":"2025-01-20T16:40:54.388950Z","steps":["trace[1970256480] 'process raft request'  (duration: 120.197595ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T16:40:54.809925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.232461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T16:40:54.810013Z","caller":"traceutil/trace.go:171","msg":"trace[1374440699] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:856; }","duration":"306.388438ms","start":"2025-01-20T16:40:54.503610Z","end":"2025-01-20T16:40:54.809999Z","steps":["trace[1374440699] 'range keys from in-memory index tree'  (duration: 306.181987ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T16:40:54.810139Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T16:40:54.503595Z","time spent":"306.470526ms","remote":"127.0.0.1:46492","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-01-20T16:40:55.909269Z","caller":"traceutil/trace.go:171","msg":"trace[1153939298] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"129.594536ms","start":"2025-01-20T16:40:55.779653Z","end":"2025-01-20T16:40:55.909248Z","steps":["trace[1153939298] 'process raft request'  (duration: 129.151275ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T16:42:48.582281Z","caller":"traceutil/trace.go:171","msg":"trace[627596658] transaction","detail":"{read_only:false; response_revision:955; number_of_response:1; }","duration":"221.297318ms","start":"2025-01-20T16:42:48.360937Z","end":"2025-01-20T16:42:48.582235Z","steps":["trace[627596658] 'process raft request'  (duration: 220.83756ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T16:42:49.013334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.986191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T16:42:49.013432Z","caller":"traceutil/trace.go:171","msg":"trace[1255525168] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:955; }","duration":"316.194219ms","start":"2025-01-20T16:42:48.697223Z","end":"2025-01-20T16:42:49.013417Z","steps":["trace[1255525168] 'range keys from in-memory index tree'  (duration: 315.908203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T16:42:49.013481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T16:42:48.697208Z","time spent":"316.260294ms","remote":"127.0.0.1:46492","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-20T16:42:49.013710Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.05842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T16:42:49.013766Z","caller":"traceutil/trace.go:171","msg":"trace[1363084677] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:955; }","duration":"290.141843ms","start":"2025-01-20T16:42:48.723615Z","end":"2025-01-20T16:42:49.013756Z","steps":["trace[1363084677] 'range keys from in-memory index tree'  (duration: 290.010745ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T16:42:49.013978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.296471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T16:42:49.014034Z","caller":"traceutil/trace.go:171","msg":"trace[1694789841] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:955; }","duration":"235.387912ms","start":"2025-01-20T16:42:48.778628Z","end":"2025-01-20T16:42:49.014016Z","steps":["trace[1694789841] 'count revisions from in-memory index tree'  (duration: 235.234071ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T16:45:27.389361Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":830}
	{"level":"info","ts":"2025-01-20T16:45:27.429753Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":830,"took":"39.860472ms","hash":1811538082,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2891776,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-20T16:45:27.430211Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1811538082,"revision":830,"compact-revision":-1}
	{"level":"info","ts":"2025-01-20T16:50:27.402535Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1082}
	{"level":"info","ts":"2025-01-20T16:50:27.407902Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1082,"took":"4.950551ms","hash":271258449,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1744896,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-20T16:50:27.407980Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":271258449,"revision":1082,"compact-revision":830}
	{"level":"info","ts":"2025-01-20T16:55:27.410862Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1333}
	{"level":"info","ts":"2025-01-20T16:55:27.415970Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1333,"took":"4.452764ms","hash":3361502158,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1810432,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T16:55:27.416064Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3361502158,"revision":1333,"compact-revision":1082}
	
	
	==> kernel <==
	 16:56:05 up 26 min,  0 users,  load average: 0.27, 0.32, 0.24
	Linux embed-certs-429406 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8eff35b19ef8754b999f3b2776284917ba3fa0c5c28cccd2d61e7f9bcbcc782c] <==
	I0120 16:51:30.065930       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 16:51:30.065984       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 16:53:30.066218       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 16:53:30.066373       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 16:53:30.066258       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 16:53:30.066523       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 16:53:30.067767       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 16:53:30.067955       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 16:55:29.065638       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 16:55:29.065823       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 16:55:30.067822       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 16:55:30.067969       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 16:55:30.067874       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 16:55:30.068141       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 16:55:30.069158       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 16:55:30.069290       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d114d69c857daf755f5a8acd085a88ddf9574fb3b0b305bb67b03adffb9c2e04] <==
	W0120 16:35:21.701577       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:21.720517       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:21.780683       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:21.797813       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:21.946428       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:21.993751       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.026968       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.079230       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.093253       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.096444       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.127493       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.144942       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.178310       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.189965       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.198687       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.277699       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.317902       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.323673       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.345409       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.398722       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.414349       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.422274       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.589266       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:22.911215       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 16:35:23.036754       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1e0281bdd0c238e081dc73927ae464a36674e076b794d7b851f3b2a1240d2942] <==
	E0120 16:51:05.834449       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:51:05.949807       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 16:51:12.112800       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-429406"
	I0120 16:51:34.283765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="383.41µs"
	E0120 16:51:35.841214       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:51:35.959591       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 16:51:42.127149       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="193.663µs"
	I0120 16:51:46.026353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="77.8µs"
	I0120 16:51:58.023462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="180.972µs"
	E0120 16:52:05.848059       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:52:05.968009       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 16:52:35.855524       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:52:35.975608       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 16:53:05.861662       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:53:05.983490       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 16:53:35.868566       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:53:35.995699       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 16:54:05.875013       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:54:06.004572       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 16:54:35.882196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:54:36.013674       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 16:55:05.890272       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:55:06.024687       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 16:55:35.897176       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 16:55:36.044402       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [88053dbaefe69bc9206f12d229ec50296fbcc1afff68b29b0df4fb28d0b30c59] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 16:35:37.638529       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 16:35:37.672778       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.123"]
	E0120 16:35:37.672863       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 16:35:37.792302       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 16:35:37.792362       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 16:35:37.792392       1 server_linux.go:170] "Using iptables Proxier"
	I0120 16:35:37.796745       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 16:35:37.797121       1 server.go:497] "Version info" version="v1.32.0"
	I0120 16:35:37.797152       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 16:35:37.798817       1 config.go:329] "Starting node config controller"
	I0120 16:35:37.798836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 16:35:37.800648       1 config.go:199] "Starting service config controller"
	I0120 16:35:37.800682       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 16:35:37.800713       1 config.go:105] "Starting endpoint slice config controller"
	I0120 16:35:37.800717       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 16:35:37.899818       1 shared_informer.go:320] Caches are synced for node config
	I0120 16:35:37.901135       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 16:35:37.901171       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3ea3fd1d65b4bcd5807b15b5a5c91aed1d7fdce2dbef517697fef3263b74295e] <==
	W0120 16:35:29.076277       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 16:35:29.076321       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:29.076618       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 16:35:29.076656       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:29.886739       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0120 16:35:29.886773       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:29.916177       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 16:35:29.916282       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:29.957217       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 16:35:29.957280       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:29.961194       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 16:35:29.961225       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:29.998479       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 16:35:29.998530       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:30.010247       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 16:35:30.010299       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:30.029542       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 16:35:30.029597       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:30.192229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 16:35:30.192280       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:30.318396       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 16:35:30.318461       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 16:35:30.648328       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 16:35:30.648399       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0120 16:35:32.668403       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 16:55:22 embed-certs-429406 kubelet[3136]: E0120 16:55:22.359944    3136 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392122359611881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:55:22 embed-certs-429406 kubelet[3136]: E0120 16:55:22.360392    3136 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392122359611881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:55:25 embed-certs-429406 kubelet[3136]: E0120 16:55:25.005502    3136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-qnvqf" podUID="2082e56a-aa58-49b4-8a5b-6b3896224219"
	Jan 20 16:55:26 embed-certs-429406 kubelet[3136]: I0120 16:55:26.008371    3136 scope.go:117] "RemoveContainer" containerID="d9c4e9afdba4aa502294c17dbfd35334dd50f69c5b885081ad2785490cabe75f"
	Jan 20 16:55:26 embed-certs-429406 kubelet[3136]: E0120 16:55:26.008569    3136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-ct9fc_kubernetes-dashboard(54237406-002d-410e-a0a9-1881bfed567c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-ct9fc" podUID="54237406-002d-410e-a0a9-1881bfed567c"
	Jan 20 16:55:32 embed-certs-429406 kubelet[3136]: E0120 16:55:32.029368    3136 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 16:55:32 embed-certs-429406 kubelet[3136]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 16:55:32 embed-certs-429406 kubelet[3136]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 16:55:32 embed-certs-429406 kubelet[3136]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 16:55:32 embed-certs-429406 kubelet[3136]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 16:55:32 embed-certs-429406 kubelet[3136]: E0120 16:55:32.362226    3136 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392132361910629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:55:32 embed-certs-429406 kubelet[3136]: E0120 16:55:32.362275    3136 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392132361910629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:55:38 embed-certs-429406 kubelet[3136]: I0120 16:55:38.005148    3136 scope.go:117] "RemoveContainer" containerID="d9c4e9afdba4aa502294c17dbfd35334dd50f69c5b885081ad2785490cabe75f"
	Jan 20 16:55:38 embed-certs-429406 kubelet[3136]: E0120 16:55:38.005735    3136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-ct9fc_kubernetes-dashboard(54237406-002d-410e-a0a9-1881bfed567c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-ct9fc" podUID="54237406-002d-410e-a0a9-1881bfed567c"
	Jan 20 16:55:39 embed-certs-429406 kubelet[3136]: E0120 16:55:39.006220    3136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-qnvqf" podUID="2082e56a-aa58-49b4-8a5b-6b3896224219"
	Jan 20 16:55:42 embed-certs-429406 kubelet[3136]: E0120 16:55:42.364396    3136 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392142363449421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:55:42 embed-certs-429406 kubelet[3136]: E0120 16:55:42.364441    3136 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392142363449421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:55:51 embed-certs-429406 kubelet[3136]: I0120 16:55:51.004529    3136 scope.go:117] "RemoveContainer" containerID="d9c4e9afdba4aa502294c17dbfd35334dd50f69c5b885081ad2785490cabe75f"
	Jan 20 16:55:51 embed-certs-429406 kubelet[3136]: E0120 16:55:51.005168    3136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-ct9fc_kubernetes-dashboard(54237406-002d-410e-a0a9-1881bfed567c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-ct9fc" podUID="54237406-002d-410e-a0a9-1881bfed567c"
	Jan 20 16:55:52 embed-certs-429406 kubelet[3136]: E0120 16:55:52.007600    3136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-qnvqf" podUID="2082e56a-aa58-49b4-8a5b-6b3896224219"
	Jan 20 16:55:52 embed-certs-429406 kubelet[3136]: E0120 16:55:52.366908    3136 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392152366437358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:55:52 embed-certs-429406 kubelet[3136]: E0120 16:55:52.366996    3136 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392152366437358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:56:02 embed-certs-429406 kubelet[3136]: E0120 16:56:02.368981    3136 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392162368490968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:56:02 embed-certs-429406 kubelet[3136]: E0120 16:56:02.369014    3136 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392162368490968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 16:56:04 embed-certs-429406 kubelet[3136]: E0120 16:56:04.007933    3136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-qnvqf" podUID="2082e56a-aa58-49b4-8a5b-6b3896224219"
	
	
	==> kubernetes-dashboard [5fd1c3d85fe37e2ed60cb00aa0f496d41aac79bcf48496a8525f24b1ca62b821] <==
	2025/01/20 16:43:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:44:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:44:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:45:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:45:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:46:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:46:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:47:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:47:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:48:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:48:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:49:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:49:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:50:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:50:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:51:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:51:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:52:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:52:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:53:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:53:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:54:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:54:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:55:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 16:55:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [41a80745e14f06233180d59108cb266aefd3f6139ca2a3f30ad558609407c843] <==
	I0120 16:35:39.528700       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 16:35:39.541391       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 16:35:39.541749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 16:35:39.562793       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 16:35:39.566970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0088e840-523e-4fbb-a768-18657ef808ca", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-429406_7517b60d-b707-422d-b297-0b6708d6c4e9 became leader
	I0120 16:35:39.569032       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-429406_7517b60d-b707-422d-b297-0b6708d6c4e9!
	I0120 16:35:39.670240       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-429406_7517b60d-b707-422d-b297-0b6708d6c4e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-429406 -n embed-certs-429406
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-429406 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-qnvqf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-429406 describe pod metrics-server-f79f97bbb-qnvqf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-429406 describe pod metrics-server-f79f97bbb-qnvqf: exit status 1 (68.837013ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-qnvqf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-429406 describe pod metrics-server-f79f97bbb-qnvqf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1600.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-806597 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-806597 create -f testdata/busybox.yaml: exit status 1 (65.23615ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-806597" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-806597 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 6 (316.391326ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 16:30:15.942799 2183917 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-806597" does not appear in /home/jenkins/minikube-integration/20109-2129584/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-806597" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 6 (287.194483ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 16:30:16.229990 2183947 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-806597" does not appear in /home/jenkins/minikube-integration/20109-2129584/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-806597" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-806597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-806597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m45.072479105s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-806597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-806597 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-806597 describe deploy/metrics-server -n kube-system: exit status 1 (52.941584ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-806597" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-806597 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 6 (257.272255ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 16:32:01.618640 2184614 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-806597" does not appear in /home/jenkins/minikube-integration/20109-2129584/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-806597" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (512.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-806597 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0120 16:32:14.249584 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:34:45.663559 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-806597 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m31.145417966s)

                                                
                                                
-- stdout --
	* [old-k8s-version-806597] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-806597" primary control-plane node in "old-k8s-version-806597" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-806597" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:32:06.214638 2184738 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:32:06.214766 2184738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:32:06.214776 2184738 out.go:358] Setting ErrFile to fd 2...
	I0120 16:32:06.214780 2184738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:32:06.215027 2184738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:32:06.215627 2184738 out.go:352] Setting JSON to false
	I0120 16:32:06.216724 2184738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29672,"bootTime":1737361054,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:32:06.216843 2184738 start.go:139] virtualization: kvm guest
	I0120 16:32:06.219112 2184738 out.go:177] * [old-k8s-version-806597] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:32:06.220375 2184738 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:32:06.220373 2184738 notify.go:220] Checking for updates...
	I0120 16:32:06.222664 2184738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:32:06.224071 2184738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:32:06.225349 2184738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:32:06.226710 2184738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:32:06.228023 2184738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:32:06.229823 2184738 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:32:06.230564 2184738 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:32:06.230689 2184738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:32:06.248527 2184738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0120 16:32:06.249012 2184738 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:32:06.249715 2184738 main.go:141] libmachine: Using API Version  1
	I0120 16:32:06.249738 2184738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:32:06.250072 2184738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:32:06.250268 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:06.252165 2184738 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 16:32:06.253490 2184738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:32:06.253865 2184738 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:32:06.253920 2184738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:32:06.270664 2184738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
	I0120 16:32:06.271224 2184738 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:32:06.271773 2184738 main.go:141] libmachine: Using API Version  1
	I0120 16:32:06.271802 2184738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:32:06.272127 2184738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:32:06.272394 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:06.310687 2184738 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 16:32:06.312010 2184738 start.go:297] selected driver: kvm2
	I0120 16:32:06.312033 2184738 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-8
06597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:32:06.312161 2184738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:32:06.312871 2184738 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:32:06.312966 2184738 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:32:06.329112 2184738 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:32:06.329552 2184738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:32:06.329590 2184738 cni.go:84] Creating CNI manager for ""
	I0120 16:32:06.329655 2184738 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:32:06.329703 2184738 start.go:340] cluster config:
	{Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:32:06.329844 2184738 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:32:06.331798 2184738 out.go:177] * Starting "old-k8s-version-806597" primary control-plane node in "old-k8s-version-806597" cluster
	I0120 16:32:06.333060 2184738 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 16:32:06.333102 2184738 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:32:06.333110 2184738 cache.go:56] Caching tarball of preloaded images
	I0120 16:32:06.333207 2184738 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:32:06.333223 2184738 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 16:32:06.333349 2184738 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/config.json ...
	I0120 16:32:06.333579 2184738 start.go:360] acquireMachinesLock for old-k8s-version-806597: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:32:06.333629 2184738 start.go:364] duration metric: took 28.372µs to acquireMachinesLock for "old-k8s-version-806597"
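
The machines lock above is acquired with a 500ms retry delay and a 13m timeout (and released further down, "held for 21.218252887s"). A minimal sketch of an acquire-with-timeout pattern like this, purely illustrative and not minikube's mutex implementation:

    package main

    import (
        "errors"
        "time"
    )

    // tryAcquire treats lock as a one-token mutex and polls it until the token
    // is obtained or the timeout elapses, sleeping delay between attempts.
    func tryAcquire(lock chan struct{}, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            select {
            case lock <- struct{}{}: // token placed: lock is now held
                return nil
            default:
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        lock := make(chan struct{}, 1) // one token == one holder
        if err := tryAcquire(lock, 500*time.Millisecond, 13*time.Minute); err != nil {
            panic(err)
        }
        defer func() { <-lock }() // release by draining the token
    }
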
	I0120 16:32:06.333646 2184738 start.go:96] Skipping create...Using existing machine configuration
	I0120 16:32:06.333656 2184738 fix.go:54] fixHost starting: 
	I0120 16:32:06.334079 2184738 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:32:06.334125 2184738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:32:06.350127 2184738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0120 16:32:06.350592 2184738 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:32:06.351141 2184738 main.go:141] libmachine: Using API Version  1
	I0120 16:32:06.351171 2184738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:32:06.351563 2184738 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:32:06.351787 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:06.351938 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetState
	I0120 16:32:06.353789 2184738 fix.go:112] recreateIfNeeded on old-k8s-version-806597: state=Stopped err=<nil>
	I0120 16:32:06.353813 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	W0120 16:32:06.353993 2184738 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 16:32:06.355925 2184738 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-806597" ...
	I0120 16:32:06.357041 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .Start
	I0120 16:32:06.357264 2184738 main.go:141] libmachine: (old-k8s-version-806597) starting domain...
	I0120 16:32:06.357286 2184738 main.go:141] libmachine: (old-k8s-version-806597) ensuring networks are active...
	I0120 16:32:06.358061 2184738 main.go:141] libmachine: (old-k8s-version-806597) Ensuring network default is active
	I0120 16:32:06.358421 2184738 main.go:141] libmachine: (old-k8s-version-806597) Ensuring network mk-old-k8s-version-806597 is active
	I0120 16:32:06.358911 2184738 main.go:141] libmachine: (old-k8s-version-806597) getting domain XML...
	I0120 16:32:06.359748 2184738 main.go:141] libmachine: (old-k8s-version-806597) creating domain...
	I0120 16:32:07.730376 2184738 main.go:141] libmachine: (old-k8s-version-806597) waiting for IP...
	I0120 16:32:07.731389 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:07.731817 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:07.731883 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:07.731808 2184783 retry.go:31] will retry after 299.911673ms: waiting for domain to come up
	I0120 16:32:08.033374 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:08.033930 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:08.033956 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:08.033899 2184783 retry.go:31] will retry after 244.01259ms: waiting for domain to come up
	I0120 16:32:08.279613 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:08.280036 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:08.280096 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:08.279992 2184783 retry.go:31] will retry after 476.069749ms: waiting for domain to come up
	I0120 16:32:08.757254 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:08.757797 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:08.757825 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:08.757776 2184783 retry.go:31] will retry after 436.534622ms: waiting for domain to come up
	I0120 16:32:09.196481 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:09.197007 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:09.197045 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:09.196969 2184783 retry.go:31] will retry after 620.959744ms: waiting for domain to come up
	I0120 16:32:09.819881 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:09.820342 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:09.820374 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:09.820306 2184783 retry.go:31] will retry after 803.464789ms: waiting for domain to come up
	I0120 16:32:10.625707 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:10.626316 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:10.626357 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:10.626263 2184783 retry.go:31] will retry after 1.173388842s: waiting for domain to come up
	I0120 16:32:11.801194 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:11.801789 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:11.801817 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:11.801770 2184783 retry.go:31] will retry after 1.437032649s: waiting for domain to come up
	I0120 16:32:13.241068 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:13.241699 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:13.241729 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:13.241648 2184783 retry.go:31] will retry after 1.789487026s: waiting for domain to come up
	I0120 16:32:15.033731 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:15.034364 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:15.034395 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:15.034310 2184783 retry.go:31] will retry after 2.241433526s: waiting for domain to come up
	I0120 16:32:17.277872 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:17.278456 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:17.278486 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:17.278408 2184783 retry.go:31] will retry after 2.569800552s: waiting for domain to come up
	I0120 16:32:19.850133 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:19.850692 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:19.850726 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:19.850659 2184783 retry.go:31] will retry after 2.428336032s: waiting for domain to come up
	I0120 16:32:22.280638 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:22.281244 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | unable to find current IP address of domain old-k8s-version-806597 in network mk-old-k8s-version-806597
	I0120 16:32:22.281293 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | I0120 16:32:22.281205 2184783 retry.go:31] will retry after 4.046356321s: waiting for domain to come up
	I0120 16:32:26.332477 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.333218 2184738 main.go:141] libmachine: (old-k8s-version-806597) found domain IP: 192.168.50.241
	I0120 16:32:26.333283 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has current primary IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.333298 2184738 main.go:141] libmachine: (old-k8s-version-806597) reserving static IP address...
	I0120 16:32:26.333756 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "old-k8s-version-806597", mac: "52:54:00:02:1a:c1", ip: "192.168.50.241"} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.333796 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | skip adding static IP to network mk-old-k8s-version-806597 - found existing host DHCP lease matching {name: "old-k8s-version-806597", mac: "52:54:00:02:1a:c1", ip: "192.168.50.241"}
	I0120 16:32:26.333825 2184738 main.go:141] libmachine: (old-k8s-version-806597) reserved static IP address 192.168.50.241 for domain old-k8s-version-806597
	I0120 16:32:26.333849 2184738 main.go:141] libmachine: (old-k8s-version-806597) waiting for SSH...
	I0120 16:32:26.333860 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | Getting to WaitForSSH function...
	I0120 16:32:26.336532 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.336881 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.336917 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.337051 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | Using SSH client type: external
	I0120 16:32:26.337076 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa (-rw-------)
	I0120 16:32:26.337123 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:32:26.337135 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | About to run SSH command:
	I0120 16:32:26.337146 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | exit 0
	I0120 16:32:26.459423 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | SSH cmd err, output: <nil>: 
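
The repeated "will retry after …: waiting for domain to come up" entries above are a poll on the domain's DHCP lease, with a delay that grows between attempts until the VM reports an IP or a deadline is hit. A rough sketch of such a backoff poll (hypothetical helper, not the retry.go implementation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the timeout elapses,
    // roughly doubling the delay (plus jitter) between attempts.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
            time.Sleep(delay)
            delay = delay*2 + time.Duration(rand.Intn(200))*time.Millisecond
        }
        return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
        ip, err := waitForIP(func() (string, error) {
            return "192.168.50.241", nil // stand-in for the libvirt DHCP lease lookup
        }, 3*time.Minute)
        fmt.Println(ip, err)
    }
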
	I0120 16:32:26.459839 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetConfigRaw
	I0120 16:32:26.460535 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:32:26.463316 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.463805 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.463835 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.464196 2184738 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/config.json ...
	I0120 16:32:26.464463 2184738 machine.go:93] provisionDockerMachine start ...
	I0120 16:32:26.464487 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:26.464734 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:26.467344 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.467703 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.467736 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.467871 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:26.468076 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:26.468272 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:26.468473 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:26.468661 2184738 main.go:141] libmachine: Using SSH client type: native
	I0120 16:32:26.468953 2184738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:32:26.468974 2184738 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 16:32:26.571309 2184738 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 16:32:26.571339 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:32:26.571619 2184738 buildroot.go:166] provisioning hostname "old-k8s-version-806597"
	I0120 16:32:26.571666 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:32:26.571883 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:26.574747 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.575156 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.575188 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.575356 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:26.575564 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:26.575747 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:26.575941 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:26.576166 2184738 main.go:141] libmachine: Using SSH client type: native
	I0120 16:32:26.576355 2184738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:32:26.576372 2184738 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-806597 && echo "old-k8s-version-806597" | sudo tee /etc/hostname
	I0120 16:32:26.691356 2184738 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-806597
	
	I0120 16:32:26.691392 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:26.694558 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.694976 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.695017 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.695263 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:26.695483 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:26.695695 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:26.695851 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:26.695983 2184738 main.go:141] libmachine: Using SSH client type: native
	I0120 16:32:26.696225 2184738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:32:26.696251 2184738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-806597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-806597/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-806597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:32:26.805150 2184738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
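
Each provisioning step above is a short shell command run on the guest over SSH with the machine's private key (the "Using SSH client type" lines). A minimal sketch of running one such command with golang.org/x/crypto/ssh; the flow below is an assumption for illustration, not the libmachine provisioner:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and host taken from the log above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.50.241:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // Same idempotent hostname command seen in the log.
        if err := sess.Run(`sudo hostname old-k8s-version-806597 && echo "old-k8s-version-806597" | sudo tee /etc/hostname`); err != nil {
            log.Fatal(err)
        }
    }
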
	I0120 16:32:26.805188 2184738 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:32:26.805218 2184738 buildroot.go:174] setting up certificates
	I0120 16:32:26.805230 2184738 provision.go:84] configureAuth start
	I0120 16:32:26.805243 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetMachineName
	I0120 16:32:26.805541 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:32:26.808757 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.809065 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.809097 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.809308 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:26.811762 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.812120 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.812169 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.812283 2184738 provision.go:143] copyHostCerts
	I0120 16:32:26.812360 2184738 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:32:26.812383 2184738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:32:26.812474 2184738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:32:26.812605 2184738 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:32:26.812618 2184738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:32:26.812654 2184738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:32:26.812743 2184738 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:32:26.812753 2184738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:32:26.812785 2184738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:32:26.812868 2184738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-806597 san=[127.0.0.1 192.168.50.241 localhost minikube old-k8s-version-806597]
	I0120 16:32:26.906123 2184738 provision.go:177] copyRemoteCerts
	I0120 16:32:26.906184 2184738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:32:26.906215 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:26.909372 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.909759 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:26.909798 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:26.910058 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:26.910285 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:26.910502 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:26.910666 2184738 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:32:26.990168 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 16:32:27.020029 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:32:27.049828 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 16:32:27.078897 2184738 provision.go:87] duration metric: took 273.647618ms to configureAuth
	I0120 16:32:27.078949 2184738 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:32:27.079212 2184738 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:32:27.079321 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:27.083136 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.083566 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:27.083606 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.084187 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:27.084471 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:27.084709 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:27.084888 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:27.085126 2184738 main.go:141] libmachine: Using SSH client type: native
	I0120 16:32:27.085377 2184738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:32:27.085403 2184738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:32:27.317878 2184738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:32:27.317913 2184738 machine.go:96] duration metric: took 853.433772ms to provisionDockerMachine
	I0120 16:32:27.317930 2184738 start.go:293] postStartSetup for "old-k8s-version-806597" (driver="kvm2")
	I0120 16:32:27.317943 2184738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:32:27.317979 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:27.318367 2184738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:32:27.318403 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:27.321550 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.321957 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:27.322021 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.322115 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:27.322346 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:27.322514 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:27.322709 2184738 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:32:27.402899 2184738 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:32:27.408095 2184738 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:32:27.408124 2184738 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:32:27.408211 2184738 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:32:27.408352 2184738 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:32:27.408497 2184738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:32:27.418800 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:32:27.445845 2184738 start.go:296] duration metric: took 127.894166ms for postStartSetup
	I0120 16:32:27.445906 2184738 fix.go:56] duration metric: took 21.112248873s for fixHost
	I0120 16:32:27.445938 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:27.449401 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.449892 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:27.449922 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.450212 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:27.450460 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:27.450683 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:27.450869 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:27.451103 2184738 main.go:141] libmachine: Using SSH client type: native
	I0120 16:32:27.451302 2184738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I0120 16:32:27.451313 2184738 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:32:27.551809 2184738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737390747.524277935
	
	I0120 16:32:27.551835 2184738 fix.go:216] guest clock: 1737390747.524277935
	I0120 16:32:27.551842 2184738 fix.go:229] Guest: 2025-01-20 16:32:27.524277935 +0000 UTC Remote: 2025-01-20 16:32:27.445911793 +0000 UTC m=+21.272502208 (delta=78.366142ms)
	I0120 16:32:27.551882 2184738 fix.go:200] guest clock delta is within tolerance: 78.366142ms
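
The clock check above parses the guest's "date +%s.%N" output, subtracts the host-side timestamp, and accepts the skew when the absolute delta is within a tolerance (78.366142ms here). A tiny sketch of that comparison; the one-second tolerance below is an assumption:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // withinTolerance reports whether the guest/host clock delta is acceptable.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        return delta, math.Abs(float64(delta)) <= float64(tol)
    }

    func main() {
        guest := time.Unix(1737390747, 524277935)          // parsed from "1737390747.524277935"
        host := guest.Add(-78366142 * time.Nanosecond)      // host-side reference time
        delta, ok := withinTolerance(guest, host, time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }
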
	I0120 16:32:27.551889 2184738 start.go:83] releasing machines lock for "old-k8s-version-806597", held for 21.218252887s
	I0120 16:32:27.551915 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:27.552215 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:32:27.555752 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.556250 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:27.556281 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.556469 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:27.557118 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:27.557331 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .DriverName
	I0120 16:32:27.557420 2184738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:32:27.557470 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:27.557652 2184738 ssh_runner.go:195] Run: cat /version.json
	I0120 16:32:27.557683 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHHostname
	I0120 16:32:27.560470 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.560811 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.560886 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:27.560905 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.561092 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:27.561193 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:27.561241 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:27.561266 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:27.561413 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHPort
	I0120 16:32:27.561507 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:27.561609 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHKeyPath
	I0120 16:32:27.561677 2184738 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:32:27.561801 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetSSHUsername
	I0120 16:32:27.561958 2184738 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/old-k8s-version-806597/id_rsa Username:docker}
	I0120 16:32:27.635776 2184738 ssh_runner.go:195] Run: systemctl --version
	I0120 16:32:27.668565 2184738 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:32:27.821438 2184738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:32:27.827668 2184738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:32:27.827749 2184738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:32:27.847463 2184738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:32:27.847497 2184738 start.go:495] detecting cgroup driver to use...
	I0120 16:32:27.847594 2184738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:32:27.864886 2184738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:32:27.879813 2184738 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:32:27.879928 2184738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:32:27.895709 2184738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:32:27.914461 2184738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:32:28.046223 2184738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:32:28.225844 2184738 docker.go:233] disabling docker service ...
	I0120 16:32:28.225927 2184738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:32:28.247599 2184738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:32:28.263085 2184738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:32:28.410107 2184738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:32:28.543464 2184738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:32:28.558955 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:32:28.579318 2184738 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 16:32:28.579405 2184738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:32:28.590817 2184738 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:32:28.590906 2184738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:32:28.602974 2184738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:32:28.615237 2184738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:32:28.627974 2184738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:32:28.639743 2184738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:32:28.650557 2184738 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:32:28.650658 2184738 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:32:28.665245 2184738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:32:28.676113 2184738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:32:28.797956 2184738 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:32:28.914701 2184738 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:32:28.914798 2184738 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:32:28.920420 2184738 start.go:563] Will wait 60s for crictl version
	I0120 16:32:28.920481 2184738 ssh_runner.go:195] Run: which crictl
	I0120 16:32:28.924826 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:32:28.975411 2184738 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:32:28.975514 2184738 ssh_runner.go:195] Run: crio --version
	I0120 16:32:29.005972 2184738 ssh_runner.go:195] Run: crio --version
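
"Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" above are bounded polls: the stat (or crictl call) is retried until it succeeds or the deadline passes. A small local sketch of that wait; the real checks run on the guest via ssh_runner, so this is illustrative only:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
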
	I0120 16:32:29.042142 2184738 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 16:32:29.043664 2184738 main.go:141] libmachine: (old-k8s-version-806597) Calling .GetIP
	I0120 16:32:29.046437 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:29.046877 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:1a:c1", ip: ""} in network mk-old-k8s-version-806597: {Iface:virbr2 ExpiryTime:2025-01-20 17:32:18 +0000 UTC Type:0 Mac:52:54:00:02:1a:c1 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:old-k8s-version-806597 Clientid:01:52:54:00:02:1a:c1}
	I0120 16:32:29.046930 2184738 main.go:141] libmachine: (old-k8s-version-806597) DBG | domain old-k8s-version-806597 has defined IP address 192.168.50.241 and MAC address 52:54:00:02:1a:c1 in network mk-old-k8s-version-806597
	I0120 16:32:29.047383 2184738 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 16:32:29.052661 2184738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:32:29.067497 2184738 kubeadm.go:883] updating cluster {Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:32:29.067618 2184738 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 16:32:29.067668 2184738 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:32:29.125944 2184738 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 16:32:29.126046 2184738 ssh_runner.go:195] Run: which lz4
	I0120 16:32:29.131060 2184738 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:32:29.135612 2184738 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:32:29.135649 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 16:32:31.058733 2184738 crio.go:462] duration metric: took 1.92770993s to copy over tarball
	I0120 16:32:31.058823 2184738 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:32:34.288881 2184738 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.230026229s)
	I0120 16:32:34.288914 2184738 crio.go:469] duration metric: took 3.230143192s to extract the tarball
	I0120 16:32:34.288925 2184738 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:32:34.335173 2184738 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:32:34.373864 2184738 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 16:32:34.373899 2184738 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 16:32:34.374019 2184738 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:32:34.374088 2184738 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:32:34.374140 2184738 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:32:34.374187 2184738 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 16:32:34.374120 2184738 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 16:32:34.374000 2184738 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:32:34.373992 2184738 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:32:34.374306 2184738 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:32:34.378146 2184738 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:32:34.378151 2184738 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:32:34.378396 2184738 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:32:34.378414 2184738 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:32:34.378771 2184738 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 16:32:34.378893 2184738 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 16:32:34.378906 2184738 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:32:34.378916 2184738 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
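
The inspect/rmi sequence that follows decides, per image, whether the runtime already holds the expected content: it asks podman for the stored image ID and, when the ID differs from the cached digest (or the image is missing), marks the image as "needs transfer" and removes the stale tag. A rough sketch of that check with os/exec; the helper name is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether image must be re-loaded because the runtime's
    // stored ID differs from the expected hash (or the image is missing entirely).
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present in the container runtime at all
        }
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        // Expected hash taken from the pause:3.2 entry in the log below.
        ok := needsTransfer("registry.k8s.io/pause:3.2",
            "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
        fmt.Println("needs transfer:", ok)
    }
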
	I0120 16:32:34.539314 2184738 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 16:32:34.572191 2184738 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:32:34.580003 2184738 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 16:32:34.591702 2184738 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 16:32:34.591770 2184738 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 16:32:34.591845 2184738 ssh_runner.go:195] Run: which crictl
	I0120 16:32:34.600742 2184738 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:32:34.608867 2184738 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:32:34.619293 2184738 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:32:34.624410 2184738 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 16:32:34.666457 2184738 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 16:32:34.666484 2184738 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 16:32:34.666520 2184738 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:32:34.666526 2184738 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 16:32:34.666544 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:32:34.666566 2184738 ssh_runner.go:195] Run: which crictl
	I0120 16:32:34.666567 2184738 ssh_runner.go:195] Run: which crictl
	I0120 16:32:34.710800 2184738 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 16:32:34.710847 2184738 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:32:34.710903 2184738 ssh_runner.go:195] Run: which crictl
	I0120 16:32:34.754923 2184738 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 16:32:34.754989 2184738 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:32:34.755046 2184738 ssh_runner.go:195] Run: which crictl
	I0120 16:32:34.757577 2184738 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 16:32:34.757615 2184738 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:32:34.757663 2184738 ssh_runner.go:195] Run: which crictl
	I0120 16:32:34.769758 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:32:34.769767 2184738 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 16:32:34.769878 2184738 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 16:32:34.769931 2184738 ssh_runner.go:195] Run: which crictl
	I0120 16:32:34.786579 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:32:34.786644 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:32:34.786654 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:32:34.786579 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:32:34.786584 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:32:34.836534 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:32:34.836593 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:32:34.898371 2184738 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:32:34.955357 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:32:34.955404 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 16:32:34.980950 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:32:34.981015 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:32:34.981083 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:32:35.002924 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:32:35.010914 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 16:32:35.233802 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 16:32:35.233889 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 16:32:35.233809 2184738 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 16:32:35.233925 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 16:32:35.234000 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 16:32:35.234013 2184738 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 16:32:35.234050 2184738 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 16:32:35.365661 2184738 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 16:32:35.365731 2184738 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 16:32:35.370642 2184738 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 16:32:35.370701 2184738 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 16:32:35.370758 2184738 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 16:32:35.370813 2184738 cache_images.go:92] duration metric: took 996.896404ms to LoadCachedImages
	W0120 16:32:35.370900 2184738 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
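	[editor's note] A minimal Go sketch of the cache-check behaviour logged above (not minikube's actual implementation): each required image is inspected via podman; if it is absent it is marked "needs transfer", any stale tag is removed with crictl, and the image would be loaded from the local cache directory shown in the log.

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// needsTransfer reports whether the image is missing from the node's runtime:
	// "podman image inspect" fails (or prints nothing) when the image is absent.
	func needsTransfer(image string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		return err != nil || strings.TrimSpace(string(out)) == ""
	}

	func main() {
		// Cache directory taken from the log above.
		cacheDir := "/home/jenkins/minikube-integration/20109-2129584/.minikube/cache/images/amd64"
		images := []string{
			"registry.k8s.io/kube-apiserver:v1.20.0",
			"registry.k8s.io/coredns:1.7.0",
		}
		for _, img := range images {
			if needsTransfer(img) {
				// Drop any stale tag first; errors are ignored in this sketch.
				exec.Command("sudo", "crictl", "rmi", img).Run()
				// Cached file name replaces the tag colon with an underscore.
				cached := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
				fmt.Println("would load", img, "from", cached)
			}
		}
	}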
	I0120 16:32:35.370916 2184738 kubeadm.go:934] updating node { 192.168.50.241 8443 v1.20.0 crio true true} ...
	I0120 16:32:35.371037 2184738 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-806597 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 16:32:35.371119 2184738 ssh_runner.go:195] Run: crio config
	I0120 16:32:35.425548 2184738 cni.go:84] Creating CNI manager for ""
	I0120 16:32:35.425582 2184738 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 16:32:35.425596 2184738 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:32:35.425623 2184738 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.241 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-806597 NodeName:old-k8s-version-806597 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 16:32:35.425807 2184738 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-806597"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
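	[editor's note] The generated /var/tmp/minikube/kubeadm.yaml above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch of splitting it and listing each document's kind, assuming the gopkg.in/yaml.v3 module is available (this is not minikube code):

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// yaml.Decoder reads one "---"-separated document per Decode call.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once all documents are consumed
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}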
	I0120 16:32:35.425889 2184738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 16:32:35.436860 2184738 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:32:35.436936 2184738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:32:35.447046 2184738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 16:32:35.466349 2184738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:32:35.484929 2184738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 16:32:35.508282 2184738 ssh_runner.go:195] Run: grep 192.168.50.241	control-plane.minikube.internal$ /etc/hosts
	I0120 16:32:35.512850 2184738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:32:35.528716 2184738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:32:35.646760 2184738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:32:35.665872 2184738 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597 for IP: 192.168.50.241
	I0120 16:32:35.665918 2184738 certs.go:194] generating shared ca certs ...
	I0120 16:32:35.665942 2184738 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:32:35.666181 2184738 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:32:35.666249 2184738 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:32:35.666263 2184738 certs.go:256] generating profile certs ...
	I0120 16:32:35.666370 2184738 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/client.key
	I0120 16:32:35.666416 2184738 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key.72816fb1
	I0120 16:32:35.666452 2184738 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key
	I0120 16:32:35.666560 2184738 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:32:35.666587 2184738 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:32:35.666597 2184738 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:32:35.666655 2184738 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:32:35.666689 2184738 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:32:35.666719 2184738 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:32:35.666770 2184738 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:32:35.667475 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:32:35.710644 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:32:35.749618 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:32:35.790496 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:32:35.825685 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 16:32:35.873058 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 16:32:35.913132 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:32:35.958361 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/old-k8s-version-806597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 16:32:36.000128 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:32:36.028123 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:32:36.053685 2184738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:32:36.079623 2184738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:32:36.099098 2184738 ssh_runner.go:195] Run: openssl version
	I0120 16:32:36.105385 2184738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:32:36.117792 2184738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:32:36.123429 2184738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:32:36.123507 2184738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:32:36.131776 2184738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:32:36.147115 2184738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:32:36.160835 2184738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:32:36.165750 2184738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:32:36.165810 2184738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:32:36.172468 2184738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:32:36.185108 2184738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:32:36.198188 2184738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:32:36.203584 2184738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:32:36.203662 2184738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:32:36.210218 2184738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:32:36.224709 2184738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:32:36.230019 2184738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 16:32:36.236803 2184738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 16:32:36.243792 2184738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 16:32:36.251221 2184738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 16:32:36.257790 2184738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 16:32:36.264250 2184738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
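	[editor's note] A minimal Go sketch of the two certificate steps logged above (a sketch, not minikube's implementation): each CA is linked into /etc/ssl/certs under its OpenSSL subject-hash name, and node certificates are checked to be valid for at least another day via "openssl x509 -checkend 86400".

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash returns the hash openssl uses for the /etc/ssl/certs/<hash>.0 symlink.
	func subjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		return strings.TrimSpace(string(out)), err
	}

	// validForADay reports whether the certificate is still valid 86400 seconds from now.
	func validForADay(pemPath string) bool {
		return exec.Command("openssl", "x509", "-noout", "-in", pemPath, "-checkend", "86400").Run() == nil
	}

	func main() {
		ca := "/usr/share/ca-certificates/minikubeCA.pem"
		if h, err := subjectHash(ca); err == nil {
			fmt.Printf("would run: ln -fs %s /etc/ssl/certs/%s.0\n", ca, h)
		}
		fmt.Println("apiserver cert valid for 24h:", validForADay("/var/lib/minikube/certs/apiserver.crt"))
	}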
	I0120 16:32:36.270738 2184738 kubeadm.go:392] StartCluster: {Name:old-k8s-version-806597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-806597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:32:36.270878 2184738 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:32:36.270944 2184738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:32:36.312881 2184738 cri.go:89] found id: ""
	I0120 16:32:36.312977 2184738 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:32:36.325274 2184738 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 16:32:36.325300 2184738 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 16:32:36.325374 2184738 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 16:32:36.338832 2184738 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 16:32:36.339653 2184738 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-806597" does not appear in /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:32:36.340095 2184738 kubeconfig.go:62] /home/jenkins/minikube-integration/20109-2129584/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-806597" cluster setting kubeconfig missing "old-k8s-version-806597" context setting]
	I0120 16:32:36.340806 2184738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:32:36.342492 2184738 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 16:32:36.355173 2184738 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.241
	I0120 16:32:36.355239 2184738 kubeadm.go:1160] stopping kube-system containers ...
	I0120 16:32:36.355259 2184738 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 16:32:36.355325 2184738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:32:36.401565 2184738 cri.go:89] found id: ""
	I0120 16:32:36.401695 2184738 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 16:32:36.425464 2184738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:32:36.436943 2184738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:32:36.436964 2184738 kubeadm.go:157] found existing configuration files:
	
	I0120 16:32:36.437034 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:32:36.447681 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:32:36.447766 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:32:36.459265 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:32:36.469685 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:32:36.469755 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:32:36.481524 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:32:36.492185 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:32:36.492274 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:32:36.503748 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:32:36.514170 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:32:36.514247 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:32:36.525319 2184738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:32:36.537202 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:32:36.850083 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:32:37.684346 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:32:37.923108 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 16:32:38.049233 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
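	[editor's note] The five commands above re-run the kubeadm init phases, in order, against the generated config during a control-plane restart. A minimal Go sketch of that sequence (simplified: the real commands run under sudo with PATH pointing at /var/lib/minikube/binaries/v1.20.0):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("kubeadm", args...).CombinedOutput()
			fmt.Printf("kubeadm %v: err=%v\n%s\n", p, err, out)
		}
	}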
	I0120 16:32:38.163145 2184738 api_server.go:52] waiting for apiserver process to appear ...
	I0120 16:32:38.163251 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:38.664387 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:39.163812 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:39.663482 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:40.163504 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:40.663612 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:41.163592 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:41.663494 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:42.163856 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:42.663894 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:43.163987 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:43.663384 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:44.163616 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:44.663953 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:45.163356 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:45.663320 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:46.163846 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:46.664333 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:47.163835 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:47.663414 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:48.164051 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:48.663874 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:49.163940 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:49.663464 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:50.163651 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:50.664185 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:51.164283 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:51.663595 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:52.163330 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:52.663400 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:53.163527 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:53.663841 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:54.164066 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:54.664097 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:55.163539 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:55.663709 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:56.163431 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:56.664162 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:57.163638 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:57.663428 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:58.163868 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:58.663452 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:59.163345 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:32:59.663848 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:00.164144 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:00.663843 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:01.163773 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:01.663833 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:02.164385 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:02.664094 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:03.163491 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:03.663772 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:04.164318 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:04.663359 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:05.163618 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:05.664203 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:06.163417 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:06.663713 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:07.163940 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:07.663972 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:08.163902 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:08.663903 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:09.163718 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:09.663845 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:10.163835 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:10.664130 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:11.163885 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:11.664292 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:12.163524 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:12.664359 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:13.163506 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:13.663730 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:14.163815 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:14.663760 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:15.163865 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:15.664124 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:16.163393 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:16.663532 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:17.164277 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:17.663940 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:18.163611 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:18.663879 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:19.163979 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:19.663880 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:20.163450 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:20.663604 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:21.163355 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:21.663393 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:22.163318 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:22.663800 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:23.163398 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:23.663902 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:24.163908 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:24.663493 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:25.163619 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:25.663821 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:26.163535 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:26.663789 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:27.163815 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:27.664116 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:28.163641 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:28.663745 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:29.163607 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:29.663887 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:30.163503 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:30.663800 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:31.163966 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:31.663917 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:32.163718 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:32.664163 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:33.163846 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:33.664349 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:34.163781 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:34.663655 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:35.163477 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:35.663487 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:36.164108 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:36.663855 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:37.163585 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:37.664150 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
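	[editor's note] The long run of identical "sudo pgrep -xnf kube-apiserver.*minikube.*" commands above is the apiserver-process wait loop: the check is retried roughly every 500ms until the process appears or the wait gives up (which is what happens here, leading to the log gathering below). A minimal Go sketch of that polling pattern, not the actual minikube code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls until pgrep finds a kube-apiserver process
	// or the timeout elapses. pgrep exits 0 only when a match exists.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}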
	I0120 16:33:38.163476 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:33:38.163571 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:33:38.206794 2184738 cri.go:89] found id: ""
	I0120 16:33:38.206834 2184738 logs.go:282] 0 containers: []
	W0120 16:33:38.206847 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:33:38.206856 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:33:38.206927 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:33:38.248720 2184738 cri.go:89] found id: ""
	I0120 16:33:38.248760 2184738 logs.go:282] 0 containers: []
	W0120 16:33:38.248773 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:33:38.248784 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:33:38.248871 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:33:38.285640 2184738 cri.go:89] found id: ""
	I0120 16:33:38.285677 2184738 logs.go:282] 0 containers: []
	W0120 16:33:38.285689 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:33:38.285697 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:33:38.285765 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:33:38.328361 2184738 cri.go:89] found id: ""
	I0120 16:33:38.328402 2184738 logs.go:282] 0 containers: []
	W0120 16:33:38.328414 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:33:38.328420 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:33:38.328501 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:33:38.365360 2184738 cri.go:89] found id: ""
	I0120 16:33:38.365397 2184738 logs.go:282] 0 containers: []
	W0120 16:33:38.365408 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:33:38.365415 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:33:38.365498 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:33:38.403418 2184738 cri.go:89] found id: ""
	I0120 16:33:38.403453 2184738 logs.go:282] 0 containers: []
	W0120 16:33:38.403465 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:33:38.403474 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:33:38.403543 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:33:38.441051 2184738 cri.go:89] found id: ""
	I0120 16:33:38.441090 2184738 logs.go:282] 0 containers: []
	W0120 16:33:38.441102 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:33:38.441111 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:33:38.441187 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:33:38.479099 2184738 cri.go:89] found id: ""
	I0120 16:33:38.479129 2184738 logs.go:282] 0 containers: []
	W0120 16:33:38.479138 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:33:38.479149 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:33:38.479160 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:33:38.534931 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:33:38.534973 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:33:38.549739 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:33:38.549775 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:33:38.697938 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:33:38.697965 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:33:38.697980 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:33:38.772478 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:33:38.772526 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:33:41.322170 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:41.341350 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:33:41.341426 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:33:41.380847 2184738 cri.go:89] found id: ""
	I0120 16:33:41.380881 2184738 logs.go:282] 0 containers: []
	W0120 16:33:41.380890 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:33:41.380912 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:33:41.380972 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:33:41.419024 2184738 cri.go:89] found id: ""
	I0120 16:33:41.419063 2184738 logs.go:282] 0 containers: []
	W0120 16:33:41.419088 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:33:41.419097 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:33:41.419157 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:33:41.458001 2184738 cri.go:89] found id: ""
	I0120 16:33:41.458060 2184738 logs.go:282] 0 containers: []
	W0120 16:33:41.458091 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:33:41.458098 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:33:41.458156 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:33:41.495607 2184738 cri.go:89] found id: ""
	I0120 16:33:41.495638 2184738 logs.go:282] 0 containers: []
	W0120 16:33:41.495647 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:33:41.495654 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:33:41.495734 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:33:41.536594 2184738 cri.go:89] found id: ""
	I0120 16:33:41.536627 2184738 logs.go:282] 0 containers: []
	W0120 16:33:41.536636 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:33:41.536643 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:33:41.536710 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:33:41.577788 2184738 cri.go:89] found id: ""
	I0120 16:33:41.577826 2184738 logs.go:282] 0 containers: []
	W0120 16:33:41.577838 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:33:41.577846 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:33:41.577936 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:33:41.626917 2184738 cri.go:89] found id: ""
	I0120 16:33:41.626956 2184738 logs.go:282] 0 containers: []
	W0120 16:33:41.626967 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:33:41.626975 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:33:41.627058 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:33:41.664273 2184738 cri.go:89] found id: ""
	I0120 16:33:41.664303 2184738 logs.go:282] 0 containers: []
	W0120 16:33:41.664319 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:33:41.664331 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:33:41.664347 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:33:41.718474 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:33:41.718521 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:33:41.733913 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:33:41.733946 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:33:41.814843 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:33:41.814868 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:33:41.814883 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:33:41.890025 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:33:41.890071 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:33:44.435113 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:44.449007 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:33:44.449125 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:33:44.485675 2184738 cri.go:89] found id: ""
	I0120 16:33:44.485707 2184738 logs.go:282] 0 containers: []
	W0120 16:33:44.485719 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:33:44.485727 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:33:44.485802 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:33:44.527133 2184738 cri.go:89] found id: ""
	I0120 16:33:44.527161 2184738 logs.go:282] 0 containers: []
	W0120 16:33:44.527169 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:33:44.527175 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:33:44.527236 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:33:44.576727 2184738 cri.go:89] found id: ""
	I0120 16:33:44.576754 2184738 logs.go:282] 0 containers: []
	W0120 16:33:44.576785 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:33:44.576794 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:33:44.576868 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:33:44.619944 2184738 cri.go:89] found id: ""
	I0120 16:33:44.619977 2184738 logs.go:282] 0 containers: []
	W0120 16:33:44.619987 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:33:44.619995 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:33:44.620065 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:33:44.658463 2184738 cri.go:89] found id: ""
	I0120 16:33:44.658498 2184738 logs.go:282] 0 containers: []
	W0120 16:33:44.658512 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:33:44.658520 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:33:44.658590 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:33:44.696884 2184738 cri.go:89] found id: ""
	I0120 16:33:44.696919 2184738 logs.go:282] 0 containers: []
	W0120 16:33:44.696928 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:33:44.696935 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:33:44.696999 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:33:44.734870 2184738 cri.go:89] found id: ""
	I0120 16:33:44.734907 2184738 logs.go:282] 0 containers: []
	W0120 16:33:44.734920 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:33:44.734928 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:33:44.735016 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:33:44.773862 2184738 cri.go:89] found id: ""
	I0120 16:33:44.773899 2184738 logs.go:282] 0 containers: []
	W0120 16:33:44.773912 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:33:44.773927 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:33:44.773941 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:33:44.826481 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:33:44.826525 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:33:44.841260 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:33:44.841296 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:33:44.921641 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:33:44.921671 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:33:44.921689 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:33:45.001515 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:33:45.001579 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:33:47.544512 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:47.558076 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:33:47.558165 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:33:47.606172 2184738 cri.go:89] found id: ""
	I0120 16:33:47.606202 2184738 logs.go:282] 0 containers: []
	W0120 16:33:47.606211 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:33:47.606218 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:33:47.606289 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:33:47.643028 2184738 cri.go:89] found id: ""
	I0120 16:33:47.643063 2184738 logs.go:282] 0 containers: []
	W0120 16:33:47.643074 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:33:47.643082 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:33:47.643141 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:33:47.684546 2184738 cri.go:89] found id: ""
	I0120 16:33:47.684583 2184738 logs.go:282] 0 containers: []
	W0120 16:33:47.684593 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:33:47.684601 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:33:47.684658 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:33:47.725213 2184738 cri.go:89] found id: ""
	I0120 16:33:47.725249 2184738 logs.go:282] 0 containers: []
	W0120 16:33:47.725261 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:33:47.725270 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:33:47.725332 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:33:47.764513 2184738 cri.go:89] found id: ""
	I0120 16:33:47.764544 2184738 logs.go:282] 0 containers: []
	W0120 16:33:47.764553 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:33:47.764560 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:33:47.764619 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:33:47.805495 2184738 cri.go:89] found id: ""
	I0120 16:33:47.805522 2184738 logs.go:282] 0 containers: []
	W0120 16:33:47.805532 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:33:47.805539 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:33:47.805591 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:33:47.846843 2184738 cri.go:89] found id: ""
	I0120 16:33:47.846877 2184738 logs.go:282] 0 containers: []
	W0120 16:33:47.846890 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:33:47.846899 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:33:47.846970 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:33:47.882895 2184738 cri.go:89] found id: ""
	I0120 16:33:47.882932 2184738 logs.go:282] 0 containers: []
	W0120 16:33:47.882944 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:33:47.882958 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:33:47.882976 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:33:47.935100 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:33:47.935144 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:33:47.950114 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:33:47.950148 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:33:48.024644 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:33:48.024676 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:33:48.024695 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:33:48.100614 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:33:48.100659 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:33:50.649211 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:50.664756 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:33:50.664850 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:33:50.703936 2184738 cri.go:89] found id: ""
	I0120 16:33:50.703967 2184738 logs.go:282] 0 containers: []
	W0120 16:33:50.703975 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:33:50.704004 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:33:50.704061 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:33:50.740434 2184738 cri.go:89] found id: ""
	I0120 16:33:50.740468 2184738 logs.go:282] 0 containers: []
	W0120 16:33:50.740479 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:33:50.740487 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:33:50.740552 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:33:50.776543 2184738 cri.go:89] found id: ""
	I0120 16:33:50.776582 2184738 logs.go:282] 0 containers: []
	W0120 16:33:50.776595 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:33:50.776604 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:33:50.776679 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:33:50.811533 2184738 cri.go:89] found id: ""
	I0120 16:33:50.811562 2184738 logs.go:282] 0 containers: []
	W0120 16:33:50.811572 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:33:50.811578 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:33:50.811657 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:33:50.848607 2184738 cri.go:89] found id: ""
	I0120 16:33:50.848645 2184738 logs.go:282] 0 containers: []
	W0120 16:33:50.848656 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:33:50.848664 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:33:50.848731 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:33:50.885671 2184738 cri.go:89] found id: ""
	I0120 16:33:50.885705 2184738 logs.go:282] 0 containers: []
	W0120 16:33:50.885714 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:33:50.885720 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:33:50.885775 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:33:50.922779 2184738 cri.go:89] found id: ""
	I0120 16:33:50.922809 2184738 logs.go:282] 0 containers: []
	W0120 16:33:50.922817 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:33:50.922823 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:33:50.922886 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:33:50.963664 2184738 cri.go:89] found id: ""
	I0120 16:33:50.963704 2184738 logs.go:282] 0 containers: []
	W0120 16:33:50.963717 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:33:50.963731 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:33:50.963747 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:33:51.014535 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:33:51.014578 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:33:51.031078 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:33:51.031110 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:33:51.104611 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:33:51.104642 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:33:51.104658 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:33:51.185173 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:33:51.185215 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:33:53.730977 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:53.746923 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:33:53.747002 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:33:53.784044 2184738 cri.go:89] found id: ""
	I0120 16:33:53.784087 2184738 logs.go:282] 0 containers: []
	W0120 16:33:53.784096 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:33:53.784103 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:33:53.784164 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:33:53.827086 2184738 cri.go:89] found id: ""
	I0120 16:33:53.827128 2184738 logs.go:282] 0 containers: []
	W0120 16:33:53.827140 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:33:53.827149 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:33:53.827224 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:33:53.864611 2184738 cri.go:89] found id: ""
	I0120 16:33:53.864641 2184738 logs.go:282] 0 containers: []
	W0120 16:33:53.864649 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:33:53.864658 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:33:53.864716 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:33:53.899934 2184738 cri.go:89] found id: ""
	I0120 16:33:53.899965 2184738 logs.go:282] 0 containers: []
	W0120 16:33:53.899977 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:33:53.899986 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:33:53.900048 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:33:53.935696 2184738 cri.go:89] found id: ""
	I0120 16:33:53.935723 2184738 logs.go:282] 0 containers: []
	W0120 16:33:53.935731 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:33:53.935737 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:33:53.935791 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:33:53.977492 2184738 cri.go:89] found id: ""
	I0120 16:33:53.977530 2184738 logs.go:282] 0 containers: []
	W0120 16:33:53.977542 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:33:53.977554 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:33:53.977628 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:33:54.022417 2184738 cri.go:89] found id: ""
	I0120 16:33:54.022444 2184738 logs.go:282] 0 containers: []
	W0120 16:33:54.022455 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:33:54.022463 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:33:54.022544 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:33:54.060113 2184738 cri.go:89] found id: ""
	I0120 16:33:54.060147 2184738 logs.go:282] 0 containers: []
	W0120 16:33:54.060157 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:33:54.060167 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:33:54.060179 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:33:54.122916 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:33:54.122961 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:33:54.138728 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:33:54.138758 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:33:54.215333 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:33:54.215369 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:33:54.215389 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:33:54.297192 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:33:54.297236 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:33:56.841380 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:56.855467 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:33:56.855560 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:33:56.891788 2184738 cri.go:89] found id: ""
	I0120 16:33:56.891819 2184738 logs.go:282] 0 containers: []
	W0120 16:33:56.891828 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:33:56.891835 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:33:56.891898 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:33:56.928062 2184738 cri.go:89] found id: ""
	I0120 16:33:56.928098 2184738 logs.go:282] 0 containers: []
	W0120 16:33:56.928107 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:33:56.928115 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:33:56.928175 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:33:56.967102 2184738 cri.go:89] found id: ""
	I0120 16:33:56.967139 2184738 logs.go:282] 0 containers: []
	W0120 16:33:56.967148 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:33:56.967155 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:33:56.967210 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:33:57.004835 2184738 cri.go:89] found id: ""
	I0120 16:33:57.004883 2184738 logs.go:282] 0 containers: []
	W0120 16:33:57.004896 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:33:57.004904 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:33:57.004974 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:33:57.042494 2184738 cri.go:89] found id: ""
	I0120 16:33:57.042532 2184738 logs.go:282] 0 containers: []
	W0120 16:33:57.042545 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:33:57.042554 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:33:57.042653 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:33:57.084330 2184738 cri.go:89] found id: ""
	I0120 16:33:57.084372 2184738 logs.go:282] 0 containers: []
	W0120 16:33:57.084386 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:33:57.084394 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:33:57.084465 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:33:57.121990 2184738 cri.go:89] found id: ""
	I0120 16:33:57.122056 2184738 logs.go:282] 0 containers: []
	W0120 16:33:57.122069 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:33:57.122078 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:33:57.122151 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:33:57.164153 2184738 cri.go:89] found id: ""
	I0120 16:33:57.164184 2184738 logs.go:282] 0 containers: []
	W0120 16:33:57.164195 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:33:57.164208 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:33:57.164225 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:33:57.220294 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:33:57.220339 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:33:57.235457 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:33:57.235501 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:33:57.310298 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:33:57.310328 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:33:57.310347 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:33:57.385227 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:33:57.385273 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:33:59.932413 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:33:59.945829 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:33:59.945925 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:33:59.984801 2184738 cri.go:89] found id: ""
	I0120 16:33:59.984836 2184738 logs.go:282] 0 containers: []
	W0120 16:33:59.984845 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:33:59.984852 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:33:59.984908 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:00.026773 2184738 cri.go:89] found id: ""
	I0120 16:34:00.026806 2184738 logs.go:282] 0 containers: []
	W0120 16:34:00.026816 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:00.026822 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:00.026878 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:00.065158 2184738 cri.go:89] found id: ""
	I0120 16:34:00.065193 2184738 logs.go:282] 0 containers: []
	W0120 16:34:00.065205 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:00.065214 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:00.065289 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:00.104227 2184738 cri.go:89] found id: ""
	I0120 16:34:00.104264 2184738 logs.go:282] 0 containers: []
	W0120 16:34:00.104287 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:00.104295 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:00.104374 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:00.145294 2184738 cri.go:89] found id: ""
	I0120 16:34:00.145333 2184738 logs.go:282] 0 containers: []
	W0120 16:34:00.145340 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:00.145348 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:00.145415 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:00.185324 2184738 cri.go:89] found id: ""
	I0120 16:34:00.185363 2184738 logs.go:282] 0 containers: []
	W0120 16:34:00.185375 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:00.185383 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:00.185451 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:00.223702 2184738 cri.go:89] found id: ""
	I0120 16:34:00.223739 2184738 logs.go:282] 0 containers: []
	W0120 16:34:00.223748 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:00.223755 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:00.223809 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:00.261144 2184738 cri.go:89] found id: ""
	I0120 16:34:00.261177 2184738 logs.go:282] 0 containers: []
	W0120 16:34:00.261188 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:00.261199 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:00.261213 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:00.303559 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:00.303596 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:00.355546 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:00.355598 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:00.370340 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:00.370380 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:00.444903 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:00.444939 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:00.444956 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:03.021714 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:03.035957 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:03.036024 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:03.072043 2184738 cri.go:89] found id: ""
	I0120 16:34:03.072076 2184738 logs.go:282] 0 containers: []
	W0120 16:34:03.072089 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:03.072096 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:03.072167 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:03.110639 2184738 cri.go:89] found id: ""
	I0120 16:34:03.110671 2184738 logs.go:282] 0 containers: []
	W0120 16:34:03.110680 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:03.110687 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:03.110747 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:03.155838 2184738 cri.go:89] found id: ""
	I0120 16:34:03.155873 2184738 logs.go:282] 0 containers: []
	W0120 16:34:03.155886 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:03.155895 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:03.155978 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:03.196672 2184738 cri.go:89] found id: ""
	I0120 16:34:03.196720 2184738 logs.go:282] 0 containers: []
	W0120 16:34:03.196732 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:03.196741 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:03.196816 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:03.236445 2184738 cri.go:89] found id: ""
	I0120 16:34:03.236482 2184738 logs.go:282] 0 containers: []
	W0120 16:34:03.236492 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:03.236499 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:03.236570 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:03.274166 2184738 cri.go:89] found id: ""
	I0120 16:34:03.274204 2184738 logs.go:282] 0 containers: []
	W0120 16:34:03.274214 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:03.274233 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:03.274327 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:03.312613 2184738 cri.go:89] found id: ""
	I0120 16:34:03.312647 2184738 logs.go:282] 0 containers: []
	W0120 16:34:03.312659 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:03.312667 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:03.312755 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:03.349174 2184738 cri.go:89] found id: ""
	I0120 16:34:03.349216 2184738 logs.go:282] 0 containers: []
	W0120 16:34:03.349228 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:03.349241 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:03.349262 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:03.364298 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:03.364330 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:03.445320 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:03.445351 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:03.445367 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:03.527177 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:03.527215 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:03.568480 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:03.568530 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:06.134707 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:06.149471 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:06.149542 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:06.187595 2184738 cri.go:89] found id: ""
	I0120 16:34:06.187632 2184738 logs.go:282] 0 containers: []
	W0120 16:34:06.187645 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:06.187654 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:06.187726 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:06.225918 2184738 cri.go:89] found id: ""
	I0120 16:34:06.225955 2184738 logs.go:282] 0 containers: []
	W0120 16:34:06.225963 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:06.225969 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:06.226026 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:06.265039 2184738 cri.go:89] found id: ""
	I0120 16:34:06.265073 2184738 logs.go:282] 0 containers: []
	W0120 16:34:06.265082 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:06.265089 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:06.265142 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:06.304591 2184738 cri.go:89] found id: ""
	I0120 16:34:06.304628 2184738 logs.go:282] 0 containers: []
	W0120 16:34:06.304638 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:06.304645 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:06.304719 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:06.341961 2184738 cri.go:89] found id: ""
	I0120 16:34:06.341988 2184738 logs.go:282] 0 containers: []
	W0120 16:34:06.341996 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:06.342003 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:06.342067 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:06.381433 2184738 cri.go:89] found id: ""
	I0120 16:34:06.381462 2184738 logs.go:282] 0 containers: []
	W0120 16:34:06.381470 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:06.381476 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:06.381530 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:06.425218 2184738 cri.go:89] found id: ""
	I0120 16:34:06.425249 2184738 logs.go:282] 0 containers: []
	W0120 16:34:06.425257 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:06.425275 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:06.425348 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:06.464384 2184738 cri.go:89] found id: ""
	I0120 16:34:06.464413 2184738 logs.go:282] 0 containers: []
	W0120 16:34:06.464422 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:06.464432 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:06.464444 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:06.518407 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:06.518451 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:06.533656 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:06.533686 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:06.611452 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:06.611485 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:06.611502 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:06.690583 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:06.690652 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:09.246069 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:09.261826 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:09.261911 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:09.299056 2184738 cri.go:89] found id: ""
	I0120 16:34:09.299100 2184738 logs.go:282] 0 containers: []
	W0120 16:34:09.299114 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:09.299124 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:09.299200 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:09.357695 2184738 cri.go:89] found id: ""
	I0120 16:34:09.357726 2184738 logs.go:282] 0 containers: []
	W0120 16:34:09.357737 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:09.357745 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:09.357820 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:09.409699 2184738 cri.go:89] found id: ""
	I0120 16:34:09.409728 2184738 logs.go:282] 0 containers: []
	W0120 16:34:09.409736 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:09.409742 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:09.409808 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:09.450047 2184738 cri.go:89] found id: ""
	I0120 16:34:09.450078 2184738 logs.go:282] 0 containers: []
	W0120 16:34:09.450090 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:09.450099 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:09.450171 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:09.487044 2184738 cri.go:89] found id: ""
	I0120 16:34:09.487071 2184738 logs.go:282] 0 containers: []
	W0120 16:34:09.487081 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:09.487087 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:09.487143 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:09.525642 2184738 cri.go:89] found id: ""
	I0120 16:34:09.525676 2184738 logs.go:282] 0 containers: []
	W0120 16:34:09.525687 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:09.525695 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:09.525763 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:09.568532 2184738 cri.go:89] found id: ""
	I0120 16:34:09.568563 2184738 logs.go:282] 0 containers: []
	W0120 16:34:09.568574 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:09.568583 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:09.568665 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:09.605660 2184738 cri.go:89] found id: ""
	I0120 16:34:09.605696 2184738 logs.go:282] 0 containers: []
	W0120 16:34:09.605721 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:09.605735 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:09.605747 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:09.661878 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:09.661939 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:09.677325 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:09.677357 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:09.753829 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:09.753864 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:09.753882 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:09.837433 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:09.837486 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:12.385710 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:12.404410 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:12.404517 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:12.440893 2184738 cri.go:89] found id: ""
	I0120 16:34:12.440925 2184738 logs.go:282] 0 containers: []
	W0120 16:34:12.440934 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:12.440941 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:12.440995 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:12.482543 2184738 cri.go:89] found id: ""
	I0120 16:34:12.482589 2184738 logs.go:282] 0 containers: []
	W0120 16:34:12.482616 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:12.482626 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:12.482694 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:12.519698 2184738 cri.go:89] found id: ""
	I0120 16:34:12.519789 2184738 logs.go:282] 0 containers: []
	W0120 16:34:12.519804 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:12.519815 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:12.519890 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:12.557931 2184738 cri.go:89] found id: ""
	I0120 16:34:12.557970 2184738 logs.go:282] 0 containers: []
	W0120 16:34:12.557983 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:12.557993 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:12.558065 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:12.595835 2184738 cri.go:89] found id: ""
	I0120 16:34:12.595864 2184738 logs.go:282] 0 containers: []
	W0120 16:34:12.595873 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:12.595881 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:12.595946 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:12.632510 2184738 cri.go:89] found id: ""
	I0120 16:34:12.632545 2184738 logs.go:282] 0 containers: []
	W0120 16:34:12.632557 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:12.632565 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:12.632638 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:12.670847 2184738 cri.go:89] found id: ""
	I0120 16:34:12.670876 2184738 logs.go:282] 0 containers: []
	W0120 16:34:12.670884 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:12.670890 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:12.670954 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:12.708406 2184738 cri.go:89] found id: ""
	I0120 16:34:12.708435 2184738 logs.go:282] 0 containers: []
	W0120 16:34:12.708444 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:12.708454 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:12.708475 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:12.722113 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:12.722151 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:12.795111 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:12.795140 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:12.795157 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:12.870159 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:12.870205 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:12.917936 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:12.917973 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:15.471853 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:15.486364 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:15.486435 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:15.525578 2184738 cri.go:89] found id: ""
	I0120 16:34:15.525609 2184738 logs.go:282] 0 containers: []
	W0120 16:34:15.525619 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:15.525627 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:15.525695 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:15.563413 2184738 cri.go:89] found id: ""
	I0120 16:34:15.563447 2184738 logs.go:282] 0 containers: []
	W0120 16:34:15.563458 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:15.563467 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:15.563547 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:15.599981 2184738 cri.go:89] found id: ""
	I0120 16:34:15.600021 2184738 logs.go:282] 0 containers: []
	W0120 16:34:15.600030 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:15.600037 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:15.600093 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:15.640608 2184738 cri.go:89] found id: ""
	I0120 16:34:15.640640 2184738 logs.go:282] 0 containers: []
	W0120 16:34:15.640662 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:15.640671 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:15.640743 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:15.679236 2184738 cri.go:89] found id: ""
	I0120 16:34:15.679271 2184738 logs.go:282] 0 containers: []
	W0120 16:34:15.679283 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:15.679293 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:15.679435 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:15.718664 2184738 cri.go:89] found id: ""
	I0120 16:34:15.718700 2184738 logs.go:282] 0 containers: []
	W0120 16:34:15.718709 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:15.718715 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:15.718782 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:15.765889 2184738 cri.go:89] found id: ""
	I0120 16:34:15.765921 2184738 logs.go:282] 0 containers: []
	W0120 16:34:15.765933 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:15.765941 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:15.766021 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:15.813977 2184738 cri.go:89] found id: ""
	I0120 16:34:15.814016 2184738 logs.go:282] 0 containers: []
	W0120 16:34:15.814029 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:15.814043 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:15.814073 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:15.869610 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:15.869661 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:15.884198 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:15.884230 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:15.958597 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:15.958643 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:15.958660 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:16.043764 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:16.043806 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:18.594742 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:18.608533 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:18.608605 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:18.643790 2184738 cri.go:89] found id: ""
	I0120 16:34:18.643831 2184738 logs.go:282] 0 containers: []
	W0120 16:34:18.643843 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:18.643851 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:18.643929 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:18.682752 2184738 cri.go:89] found id: ""
	I0120 16:34:18.682785 2184738 logs.go:282] 0 containers: []
	W0120 16:34:18.682794 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:18.682802 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:18.682860 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:18.718455 2184738 cri.go:89] found id: ""
	I0120 16:34:18.718490 2184738 logs.go:282] 0 containers: []
	W0120 16:34:18.718500 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:18.718506 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:18.718577 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:18.757457 2184738 cri.go:89] found id: ""
	I0120 16:34:18.757487 2184738 logs.go:282] 0 containers: []
	W0120 16:34:18.757496 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:18.757502 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:18.757558 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:18.797757 2184738 cri.go:89] found id: ""
	I0120 16:34:18.797785 2184738 logs.go:282] 0 containers: []
	W0120 16:34:18.797793 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:18.797799 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:18.797853 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:18.835738 2184738 cri.go:89] found id: ""
	I0120 16:34:18.835778 2184738 logs.go:282] 0 containers: []
	W0120 16:34:18.835788 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:18.835795 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:18.835856 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:18.875224 2184738 cri.go:89] found id: ""
	I0120 16:34:18.875257 2184738 logs.go:282] 0 containers: []
	W0120 16:34:18.875266 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:18.875272 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:18.875333 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:18.911679 2184738 cri.go:89] found id: ""
	I0120 16:34:18.911720 2184738 logs.go:282] 0 containers: []
	W0120 16:34:18.911732 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:18.911748 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:18.911766 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:18.962370 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:18.962410 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:18.976691 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:18.976719 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:19.055621 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:19.055648 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:19.055661 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:19.137775 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:19.137814 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:21.681893 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:21.700252 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:21.700338 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:21.751022 2184738 cri.go:89] found id: ""
	I0120 16:34:21.751059 2184738 logs.go:282] 0 containers: []
	W0120 16:34:21.751072 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:21.751080 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:21.751161 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:21.803117 2184738 cri.go:89] found id: ""
	I0120 16:34:21.803156 2184738 logs.go:282] 0 containers: []
	W0120 16:34:21.803168 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:21.803176 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:21.803233 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:21.852827 2184738 cri.go:89] found id: ""
	I0120 16:34:21.852868 2184738 logs.go:282] 0 containers: []
	W0120 16:34:21.852879 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:21.852888 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:21.852960 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:21.892608 2184738 cri.go:89] found id: ""
	I0120 16:34:21.892637 2184738 logs.go:282] 0 containers: []
	W0120 16:34:21.892645 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:21.892652 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:21.892705 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:21.931852 2184738 cri.go:89] found id: ""
	I0120 16:34:21.931891 2184738 logs.go:282] 0 containers: []
	W0120 16:34:21.931903 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:21.931911 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:21.931980 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:21.969968 2184738 cri.go:89] found id: ""
	I0120 16:34:21.970011 2184738 logs.go:282] 0 containers: []
	W0120 16:34:21.970023 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:21.970033 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:21.970107 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:22.007515 2184738 cri.go:89] found id: ""
	I0120 16:34:22.007558 2184738 logs.go:282] 0 containers: []
	W0120 16:34:22.007580 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:22.007590 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:22.007669 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:22.046779 2184738 cri.go:89] found id: ""
	I0120 16:34:22.046814 2184738 logs.go:282] 0 containers: []
	W0120 16:34:22.046825 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:22.046838 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:22.046854 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:22.100023 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:22.100076 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:22.115708 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:22.115767 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:22.194087 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:22.194114 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:22.194128 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:22.282033 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:22.282083 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:24.825133 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:24.839208 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:24.839303 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:24.880549 2184738 cri.go:89] found id: ""
	I0120 16:34:24.880604 2184738 logs.go:282] 0 containers: []
	W0120 16:34:24.880628 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:24.880636 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:24.880704 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:24.917435 2184738 cri.go:89] found id: ""
	I0120 16:34:24.917470 2184738 logs.go:282] 0 containers: []
	W0120 16:34:24.917481 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:24.917488 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:24.917569 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:24.957535 2184738 cri.go:89] found id: ""
	I0120 16:34:24.957566 2184738 logs.go:282] 0 containers: []
	W0120 16:34:24.957575 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:24.957581 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:24.957665 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:24.994996 2184738 cri.go:89] found id: ""
	I0120 16:34:24.995028 2184738 logs.go:282] 0 containers: []
	W0120 16:34:24.995038 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:24.995062 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:24.995134 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:25.032569 2184738 cri.go:89] found id: ""
	I0120 16:34:25.032601 2184738 logs.go:282] 0 containers: []
	W0120 16:34:25.032613 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:25.032628 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:25.032698 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:25.068425 2184738 cri.go:89] found id: ""
	I0120 16:34:25.068458 2184738 logs.go:282] 0 containers: []
	W0120 16:34:25.068470 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:25.068478 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:25.068549 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:25.107697 2184738 cri.go:89] found id: ""
	I0120 16:34:25.107725 2184738 logs.go:282] 0 containers: []
	W0120 16:34:25.107734 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:25.107740 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:25.107794 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:25.143685 2184738 cri.go:89] found id: ""
	I0120 16:34:25.143748 2184738 logs.go:282] 0 containers: []
	W0120 16:34:25.143761 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:25.143774 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:25.143792 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:25.198060 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:25.198113 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:25.213577 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:25.213615 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:25.293738 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:25.293773 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:25.293792 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:25.370238 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:25.370288 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:27.915284 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:27.929236 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:27.929329 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:27.965898 2184738 cri.go:89] found id: ""
	I0120 16:34:27.965936 2184738 logs.go:282] 0 containers: []
	W0120 16:34:27.965947 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:27.965953 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:27.966017 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:28.003277 2184738 cri.go:89] found id: ""
	I0120 16:34:28.003308 2184738 logs.go:282] 0 containers: []
	W0120 16:34:28.003325 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:28.003333 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:28.003398 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:28.044270 2184738 cri.go:89] found id: ""
	I0120 16:34:28.044318 2184738 logs.go:282] 0 containers: []
	W0120 16:34:28.044328 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:28.044335 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:28.044391 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:28.081063 2184738 cri.go:89] found id: ""
	I0120 16:34:28.081097 2184738 logs.go:282] 0 containers: []
	W0120 16:34:28.081107 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:28.081114 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:28.081182 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:28.116563 2184738 cri.go:89] found id: ""
	I0120 16:34:28.116604 2184738 logs.go:282] 0 containers: []
	W0120 16:34:28.116616 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:28.116625 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:28.116703 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:28.161254 2184738 cri.go:89] found id: ""
	I0120 16:34:28.161304 2184738 logs.go:282] 0 containers: []
	W0120 16:34:28.161318 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:28.161327 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:28.161400 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:28.205091 2184738 cri.go:89] found id: ""
	I0120 16:34:28.205122 2184738 logs.go:282] 0 containers: []
	W0120 16:34:28.205132 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:28.205138 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:28.205193 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:28.252721 2184738 cri.go:89] found id: ""
	I0120 16:34:28.252752 2184738 logs.go:282] 0 containers: []
	W0120 16:34:28.252760 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:28.252776 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:28.252791 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:28.297920 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:28.297959 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:28.350586 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:28.350655 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:28.365465 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:28.365495 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:28.446708 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:28.446772 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:28.446796 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:31.022749 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:31.037037 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:31.037124 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:31.074377 2184738 cri.go:89] found id: ""
	I0120 16:34:31.074411 2184738 logs.go:282] 0 containers: []
	W0120 16:34:31.074430 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:31.074439 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:31.074499 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:31.116915 2184738 cri.go:89] found id: ""
	I0120 16:34:31.116945 2184738 logs.go:282] 0 containers: []
	W0120 16:34:31.116954 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:31.116960 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:31.117029 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:31.155801 2184738 cri.go:89] found id: ""
	I0120 16:34:31.155834 2184738 logs.go:282] 0 containers: []
	W0120 16:34:31.155843 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:31.155850 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:31.155921 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:31.193413 2184738 cri.go:89] found id: ""
	I0120 16:34:31.193447 2184738 logs.go:282] 0 containers: []
	W0120 16:34:31.193460 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:31.193469 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:31.193543 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:31.231680 2184738 cri.go:89] found id: ""
	I0120 16:34:31.231709 2184738 logs.go:282] 0 containers: []
	W0120 16:34:31.231721 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:31.231730 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:31.231792 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:31.273269 2184738 cri.go:89] found id: ""
	I0120 16:34:31.273314 2184738 logs.go:282] 0 containers: []
	W0120 16:34:31.273326 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:31.273335 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:31.273401 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:31.317446 2184738 cri.go:89] found id: ""
	I0120 16:34:31.317475 2184738 logs.go:282] 0 containers: []
	W0120 16:34:31.317484 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:31.317491 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:31.317555 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:31.353794 2184738 cri.go:89] found id: ""
	I0120 16:34:31.353829 2184738 logs.go:282] 0 containers: []
	W0120 16:34:31.353841 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:31.353855 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:31.353872 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:31.440834 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:31.440868 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:31.440886 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:31.519640 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:31.519688 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:31.564413 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:31.564456 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:31.618716 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:31.618759 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:34.134769 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:34.150175 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:34.150267 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:34.186986 2184738 cri.go:89] found id: ""
	I0120 16:34:34.187027 2184738 logs.go:282] 0 containers: []
	W0120 16:34:34.187039 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:34.187048 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:34.187115 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:34.227727 2184738 cri.go:89] found id: ""
	I0120 16:34:34.227755 2184738 logs.go:282] 0 containers: []
	W0120 16:34:34.227766 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:34.227775 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:34.227839 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:34.269318 2184738 cri.go:89] found id: ""
	I0120 16:34:34.269353 2184738 logs.go:282] 0 containers: []
	W0120 16:34:34.269362 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:34.269369 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:34.269427 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:34.315662 2184738 cri.go:89] found id: ""
	I0120 16:34:34.315707 2184738 logs.go:282] 0 containers: []
	W0120 16:34:34.315720 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:34.315728 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:34.315799 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:34.350390 2184738 cri.go:89] found id: ""
	I0120 16:34:34.350431 2184738 logs.go:282] 0 containers: []
	W0120 16:34:34.350443 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:34.350451 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:34.350530 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:34.389072 2184738 cri.go:89] found id: ""
	I0120 16:34:34.389114 2184738 logs.go:282] 0 containers: []
	W0120 16:34:34.389124 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:34.389131 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:34.389200 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:34.426543 2184738 cri.go:89] found id: ""
	I0120 16:34:34.426581 2184738 logs.go:282] 0 containers: []
	W0120 16:34:34.426593 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:34.426618 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:34.426701 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:34.464923 2184738 cri.go:89] found id: ""
	I0120 16:34:34.464953 2184738 logs.go:282] 0 containers: []
	W0120 16:34:34.464963 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:34.464974 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:34.464987 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:34.520130 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:34.520178 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:34.534728 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:34.534775 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:34.609632 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:34.609668 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:34.609688 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:34.692408 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:34.692458 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:37.231657 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:37.247111 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:37.247184 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:37.282656 2184738 cri.go:89] found id: ""
	I0120 16:34:37.282688 2184738 logs.go:282] 0 containers: []
	W0120 16:34:37.282700 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:37.282709 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:37.282779 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:37.319612 2184738 cri.go:89] found id: ""
	I0120 16:34:37.319718 2184738 logs.go:282] 0 containers: []
	W0120 16:34:37.319743 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:37.319754 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:37.319830 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:37.357203 2184738 cri.go:89] found id: ""
	I0120 16:34:37.357231 2184738 logs.go:282] 0 containers: []
	W0120 16:34:37.357239 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:37.357246 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:37.357302 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:37.395324 2184738 cri.go:89] found id: ""
	I0120 16:34:37.395361 2184738 logs.go:282] 0 containers: []
	W0120 16:34:37.395374 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:37.395382 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:37.395440 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:37.431962 2184738 cri.go:89] found id: ""
	I0120 16:34:37.431998 2184738 logs.go:282] 0 containers: []
	W0120 16:34:37.432010 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:37.432018 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:37.432088 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:37.469515 2184738 cri.go:89] found id: ""
	I0120 16:34:37.469545 2184738 logs.go:282] 0 containers: []
	W0120 16:34:37.469553 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:37.469559 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:37.469615 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:37.507928 2184738 cri.go:89] found id: ""
	I0120 16:34:37.507965 2184738 logs.go:282] 0 containers: []
	W0120 16:34:37.507974 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:37.507980 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:37.508049 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:37.546845 2184738 cri.go:89] found id: ""
	I0120 16:34:37.546881 2184738 logs.go:282] 0 containers: []
	W0120 16:34:37.546894 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:37.546907 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:37.546922 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:37.599388 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:37.599434 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:37.613380 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:37.613418 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:37.693099 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:37.693122 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:37.693140 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:37.775128 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:37.775177 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:40.318092 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:40.332139 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:40.332208 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:40.371256 2184738 cri.go:89] found id: ""
	I0120 16:34:40.371304 2184738 logs.go:282] 0 containers: []
	W0120 16:34:40.371316 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:40.371326 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:40.371409 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:40.408463 2184738 cri.go:89] found id: ""
	I0120 16:34:40.408493 2184738 logs.go:282] 0 containers: []
	W0120 16:34:40.408502 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:40.408508 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:40.408562 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:40.446477 2184738 cri.go:89] found id: ""
	I0120 16:34:40.446524 2184738 logs.go:282] 0 containers: []
	W0120 16:34:40.446537 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:40.446546 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:40.446647 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:40.486237 2184738 cri.go:89] found id: ""
	I0120 16:34:40.486268 2184738 logs.go:282] 0 containers: []
	W0120 16:34:40.486278 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:40.486286 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:40.486362 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:40.533073 2184738 cri.go:89] found id: ""
	I0120 16:34:40.533121 2184738 logs.go:282] 0 containers: []
	W0120 16:34:40.533134 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:40.533143 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:40.533228 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:40.574670 2184738 cri.go:89] found id: ""
	I0120 16:34:40.574732 2184738 logs.go:282] 0 containers: []
	W0120 16:34:40.574744 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:40.574754 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:40.574826 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:40.612193 2184738 cri.go:89] found id: ""
	I0120 16:34:40.612224 2184738 logs.go:282] 0 containers: []
	W0120 16:34:40.612231 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:40.612246 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:40.612303 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:40.651606 2184738 cri.go:89] found id: ""
	I0120 16:34:40.651639 2184738 logs.go:282] 0 containers: []
	W0120 16:34:40.651648 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:40.651658 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:40.651670 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:40.723519 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:40.723542 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:40.723554 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:40.804284 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:40.804329 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:40.848135 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:40.848164 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:40.903038 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:40.903085 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:43.418781 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:43.434091 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:43.434185 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:43.476570 2184738 cri.go:89] found id: ""
	I0120 16:34:43.476604 2184738 logs.go:282] 0 containers: []
	W0120 16:34:43.476614 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:43.476622 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:43.476693 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:43.516865 2184738 cri.go:89] found id: ""
	I0120 16:34:43.516896 2184738 logs.go:282] 0 containers: []
	W0120 16:34:43.516908 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:43.516916 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:43.516988 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:43.554497 2184738 cri.go:89] found id: ""
	I0120 16:34:43.554532 2184738 logs.go:282] 0 containers: []
	W0120 16:34:43.554544 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:43.554561 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:43.554647 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:43.596198 2184738 cri.go:89] found id: ""
	I0120 16:34:43.596233 2184738 logs.go:282] 0 containers: []
	W0120 16:34:43.596246 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:43.596254 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:43.596343 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:43.642472 2184738 cri.go:89] found id: ""
	I0120 16:34:43.642505 2184738 logs.go:282] 0 containers: []
	W0120 16:34:43.642517 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:43.642525 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:43.642597 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:43.685472 2184738 cri.go:89] found id: ""
	I0120 16:34:43.685509 2184738 logs.go:282] 0 containers: []
	W0120 16:34:43.685520 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:43.685529 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:43.685604 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:43.724328 2184738 cri.go:89] found id: ""
	I0120 16:34:43.724361 2184738 logs.go:282] 0 containers: []
	W0120 16:34:43.724371 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:43.724377 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:43.724442 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:43.770989 2184738 cri.go:89] found id: ""
	I0120 16:34:43.771019 2184738 logs.go:282] 0 containers: []
	W0120 16:34:43.771037 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:43.771051 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:43.771068 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:43.788859 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:43.788897 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:43.867575 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:43.867606 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:43.867624 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:43.946717 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:43.946765 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:43.988691 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:43.988724 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:46.545393 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:46.561205 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:46.561275 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:46.602195 2184738 cri.go:89] found id: ""
	I0120 16:34:46.602226 2184738 logs.go:282] 0 containers: []
	W0120 16:34:46.602243 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:46.602249 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:46.602334 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:46.641537 2184738 cri.go:89] found id: ""
	I0120 16:34:46.641574 2184738 logs.go:282] 0 containers: []
	W0120 16:34:46.641583 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:46.641594 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:46.641647 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:46.684452 2184738 cri.go:89] found id: ""
	I0120 16:34:46.684480 2184738 logs.go:282] 0 containers: []
	W0120 16:34:46.684489 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:46.684495 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:46.684552 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:46.724477 2184738 cri.go:89] found id: ""
	I0120 16:34:46.724514 2184738 logs.go:282] 0 containers: []
	W0120 16:34:46.724527 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:46.724535 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:46.724615 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:46.762549 2184738 cri.go:89] found id: ""
	I0120 16:34:46.762578 2184738 logs.go:282] 0 containers: []
	W0120 16:34:46.762590 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:46.762598 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:46.762680 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:46.805506 2184738 cri.go:89] found id: ""
	I0120 16:34:46.805544 2184738 logs.go:282] 0 containers: []
	W0120 16:34:46.805556 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:46.805564 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:46.805635 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:46.842930 2184738 cri.go:89] found id: ""
	I0120 16:34:46.842970 2184738 logs.go:282] 0 containers: []
	W0120 16:34:46.842979 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:46.842985 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:46.843079 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:46.880605 2184738 cri.go:89] found id: ""
	I0120 16:34:46.880644 2184738 logs.go:282] 0 containers: []
	W0120 16:34:46.880657 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:46.880670 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:46.880688 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:46.895277 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:46.895312 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:46.975812 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:46.975839 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:46.975855 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:47.055343 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:47.055389 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:47.101110 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:47.101147 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:49.656220 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:49.672472 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:49.672539 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:49.706474 2184738 cri.go:89] found id: ""
	I0120 16:34:49.706523 2184738 logs.go:282] 0 containers: []
	W0120 16:34:49.706536 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:49.706544 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:49.706640 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:49.745363 2184738 cri.go:89] found id: ""
	I0120 16:34:49.745393 2184738 logs.go:282] 0 containers: []
	W0120 16:34:49.745403 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:49.745413 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:49.745498 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:49.780617 2184738 cri.go:89] found id: ""
	I0120 16:34:49.780657 2184738 logs.go:282] 0 containers: []
	W0120 16:34:49.780670 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:49.780679 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:49.780763 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:49.815797 2184738 cri.go:89] found id: ""
	I0120 16:34:49.815829 2184738 logs.go:282] 0 containers: []
	W0120 16:34:49.815840 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:49.815849 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:49.815922 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:49.858527 2184738 cri.go:89] found id: ""
	I0120 16:34:49.858563 2184738 logs.go:282] 0 containers: []
	W0120 16:34:49.858575 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:49.858584 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:49.858672 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:49.919991 2184738 cri.go:89] found id: ""
	I0120 16:34:49.920032 2184738 logs.go:282] 0 containers: []
	W0120 16:34:49.920041 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:49.920047 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:49.920102 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:49.961325 2184738 cri.go:89] found id: ""
	I0120 16:34:49.961370 2184738 logs.go:282] 0 containers: []
	W0120 16:34:49.961381 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:49.961390 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:49.961451 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:49.999618 2184738 cri.go:89] found id: ""
	I0120 16:34:49.999654 2184738 logs.go:282] 0 containers: []
	W0120 16:34:49.999666 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:49.999681 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:49.999698 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:50.048614 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:50.048653 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:50.063573 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:50.063608 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:50.149239 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:50.149270 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:50.149287 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:50.225075 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:50.225123 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:52.772941 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:52.791784 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:52.791860 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:52.832889 2184738 cri.go:89] found id: ""
	I0120 16:34:52.832919 2184738 logs.go:282] 0 containers: []
	W0120 16:34:52.832928 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:52.832934 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:52.832995 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:52.875055 2184738 cri.go:89] found id: ""
	I0120 16:34:52.875088 2184738 logs.go:282] 0 containers: []
	W0120 16:34:52.875098 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:52.875104 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:52.875175 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:52.921123 2184738 cri.go:89] found id: ""
	I0120 16:34:52.921157 2184738 logs.go:282] 0 containers: []
	W0120 16:34:52.921169 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:52.921179 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:52.921245 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:52.967594 2184738 cri.go:89] found id: ""
	I0120 16:34:52.967629 2184738 logs.go:282] 0 containers: []
	W0120 16:34:52.967640 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:52.967649 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:52.967728 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:53.008693 2184738 cri.go:89] found id: ""
	I0120 16:34:53.008732 2184738 logs.go:282] 0 containers: []
	W0120 16:34:53.008744 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:53.008753 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:53.008834 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:53.050886 2184738 cri.go:89] found id: ""
	I0120 16:34:53.050924 2184738 logs.go:282] 0 containers: []
	W0120 16:34:53.050938 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:53.050950 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:53.051056 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:53.098826 2184738 cri.go:89] found id: ""
	I0120 16:34:53.098861 2184738 logs.go:282] 0 containers: []
	W0120 16:34:53.098872 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:53.098882 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:53.098952 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:53.150178 2184738 cri.go:89] found id: ""
	I0120 16:34:53.150211 2184738 logs.go:282] 0 containers: []
	W0120 16:34:53.150223 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:53.150237 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:53.150256 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:53.217980 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:53.218047 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:53.237190 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:53.237223 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:53.321047 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:53.321074 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:53.321087 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:53.406551 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:53.406590 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:55.956711 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:55.977332 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:55.977397 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:56.030596 2184738 cri.go:89] found id: ""
	I0120 16:34:56.030648 2184738 logs.go:282] 0 containers: []
	W0120 16:34:56.030659 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:56.030667 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:56.030727 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:56.074160 2184738 cri.go:89] found id: ""
	I0120 16:34:56.074197 2184738 logs.go:282] 0 containers: []
	W0120 16:34:56.074208 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:56.074214 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:56.074321 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:56.113140 2184738 cri.go:89] found id: ""
	I0120 16:34:56.113182 2184738 logs.go:282] 0 containers: []
	W0120 16:34:56.113195 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:56.113205 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:56.113269 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:56.149350 2184738 cri.go:89] found id: ""
	I0120 16:34:56.149395 2184738 logs.go:282] 0 containers: []
	W0120 16:34:56.149406 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:56.149415 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:56.149492 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:56.190747 2184738 cri.go:89] found id: ""
	I0120 16:34:56.190799 2184738 logs.go:282] 0 containers: []
	W0120 16:34:56.190812 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:56.190822 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:56.190890 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:56.231779 2184738 cri.go:89] found id: ""
	I0120 16:34:56.231809 2184738 logs.go:282] 0 containers: []
	W0120 16:34:56.231819 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:56.231827 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:56.231893 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:56.273063 2184738 cri.go:89] found id: ""
	I0120 16:34:56.273095 2184738 logs.go:282] 0 containers: []
	W0120 16:34:56.273105 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:56.273114 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:56.273185 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:56.314199 2184738 cri.go:89] found id: ""
	I0120 16:34:56.314235 2184738 logs.go:282] 0 containers: []
	W0120 16:34:56.314247 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:56.314260 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:56.314288 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:56.397666 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:56.397735 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:34:56.442222 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:56.442314 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:56.496936 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:56.497008 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:56.528832 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:56.528873 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:56.633757 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:59.135504 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:34:59.150368 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:34:59.150441 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:34:59.189232 2184738 cri.go:89] found id: ""
	I0120 16:34:59.189273 2184738 logs.go:282] 0 containers: []
	W0120 16:34:59.189282 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:34:59.189291 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:34:59.189354 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:34:59.228718 2184738 cri.go:89] found id: ""
	I0120 16:34:59.228752 2184738 logs.go:282] 0 containers: []
	W0120 16:34:59.228760 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:34:59.228766 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:34:59.228822 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:34:59.268140 2184738 cri.go:89] found id: ""
	I0120 16:34:59.268175 2184738 logs.go:282] 0 containers: []
	W0120 16:34:59.268184 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:34:59.268191 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:34:59.268258 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:34:59.304914 2184738 cri.go:89] found id: ""
	I0120 16:34:59.304956 2184738 logs.go:282] 0 containers: []
	W0120 16:34:59.304969 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:34:59.304978 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:34:59.305090 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:34:59.344117 2184738 cri.go:89] found id: ""
	I0120 16:34:59.344149 2184738 logs.go:282] 0 containers: []
	W0120 16:34:59.344157 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:34:59.344164 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:34:59.344227 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:34:59.381084 2184738 cri.go:89] found id: ""
	I0120 16:34:59.381118 2184738 logs.go:282] 0 containers: []
	W0120 16:34:59.381127 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:34:59.381134 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:34:59.381190 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:34:59.417064 2184738 cri.go:89] found id: ""
	I0120 16:34:59.417100 2184738 logs.go:282] 0 containers: []
	W0120 16:34:59.417113 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:34:59.417121 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:34:59.417186 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:34:59.454177 2184738 cri.go:89] found id: ""
	I0120 16:34:59.454212 2184738 logs.go:282] 0 containers: []
	W0120 16:34:59.454223 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:34:59.454236 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:34:59.454250 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:34:59.508543 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:34:59.508603 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:34:59.524229 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:34:59.524271 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:34:59.601536 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:34:59.601569 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:34:59.601585 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:34:59.684843 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:34:59.684884 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:02.227457 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:02.245889 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:02.245967 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:02.302576 2184738 cri.go:89] found id: ""
	I0120 16:35:02.302641 2184738 logs.go:282] 0 containers: []
	W0120 16:35:02.302655 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:02.302663 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:02.302747 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:02.357403 2184738 cri.go:89] found id: ""
	I0120 16:35:02.357442 2184738 logs.go:282] 0 containers: []
	W0120 16:35:02.357456 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:02.357465 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:02.357535 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:02.414134 2184738 cri.go:89] found id: ""
	I0120 16:35:02.414174 2184738 logs.go:282] 0 containers: []
	W0120 16:35:02.414187 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:02.414196 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:02.414267 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:02.455771 2184738 cri.go:89] found id: ""
	I0120 16:35:02.455802 2184738 logs.go:282] 0 containers: []
	W0120 16:35:02.455814 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:02.455823 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:02.455893 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:02.497527 2184738 cri.go:89] found id: ""
	I0120 16:35:02.497564 2184738 logs.go:282] 0 containers: []
	W0120 16:35:02.497574 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:02.497580 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:02.497656 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:02.548108 2184738 cri.go:89] found id: ""
	I0120 16:35:02.548155 2184738 logs.go:282] 0 containers: []
	W0120 16:35:02.548169 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:02.548178 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:02.548250 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:02.589316 2184738 cri.go:89] found id: ""
	I0120 16:35:02.589348 2184738 logs.go:282] 0 containers: []
	W0120 16:35:02.589358 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:02.589364 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:02.589422 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:02.633499 2184738 cri.go:89] found id: ""
	I0120 16:35:02.633537 2184738 logs.go:282] 0 containers: []
	W0120 16:35:02.633550 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:02.633563 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:02.633585 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:02.650717 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:02.650754 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:02.736441 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:02.736471 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:02.736488 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:02.817423 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:02.817473 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:02.861756 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:02.861805 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:05.418768 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:05.436145 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:05.436232 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:05.482299 2184738 cri.go:89] found id: ""
	I0120 16:35:05.482332 2184738 logs.go:282] 0 containers: []
	W0120 16:35:05.482343 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:05.482356 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:05.482421 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:05.530540 2184738 cri.go:89] found id: ""
	I0120 16:35:05.530577 2184738 logs.go:282] 0 containers: []
	W0120 16:35:05.530590 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:05.530600 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:05.530676 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:05.574335 2184738 cri.go:89] found id: ""
	I0120 16:35:05.574366 2184738 logs.go:282] 0 containers: []
	W0120 16:35:05.574378 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:05.574386 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:05.574458 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:05.620395 2184738 cri.go:89] found id: ""
	I0120 16:35:05.620432 2184738 logs.go:282] 0 containers: []
	W0120 16:35:05.620440 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:05.620447 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:05.620512 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:05.665392 2184738 cri.go:89] found id: ""
	I0120 16:35:05.665424 2184738 logs.go:282] 0 containers: []
	W0120 16:35:05.665437 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:05.665446 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:05.665518 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:05.705630 2184738 cri.go:89] found id: ""
	I0120 16:35:05.705658 2184738 logs.go:282] 0 containers: []
	W0120 16:35:05.705667 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:05.705674 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:05.705739 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:05.745248 2184738 cri.go:89] found id: ""
	I0120 16:35:05.745285 2184738 logs.go:282] 0 containers: []
	W0120 16:35:05.745297 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:05.745306 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:05.745377 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:05.787115 2184738 cri.go:89] found id: ""
	I0120 16:35:05.787155 2184738 logs.go:282] 0 containers: []
	W0120 16:35:05.787165 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:05.787176 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:05.787196 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:05.843320 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:05.843370 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:05.857988 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:05.858024 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:05.937543 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:05.937574 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:05.937591 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:06.020766 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:06.020814 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:08.570107 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:08.584385 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:08.584505 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:08.621909 2184738 cri.go:89] found id: ""
	I0120 16:35:08.621940 2184738 logs.go:282] 0 containers: []
	W0120 16:35:08.621952 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:08.621960 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:08.622057 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:08.665691 2184738 cri.go:89] found id: ""
	I0120 16:35:08.665729 2184738 logs.go:282] 0 containers: []
	W0120 16:35:08.665741 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:08.665750 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:08.665831 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:08.705614 2184738 cri.go:89] found id: ""
	I0120 16:35:08.705641 2184738 logs.go:282] 0 containers: []
	W0120 16:35:08.705652 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:08.705661 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:08.705731 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:08.753879 2184738 cri.go:89] found id: ""
	I0120 16:35:08.753914 2184738 logs.go:282] 0 containers: []
	W0120 16:35:08.753924 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:08.753931 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:08.753995 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:08.799929 2184738 cri.go:89] found id: ""
	I0120 16:35:08.799962 2184738 logs.go:282] 0 containers: []
	W0120 16:35:08.799973 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:08.799981 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:08.800058 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:08.843439 2184738 cri.go:89] found id: ""
	I0120 16:35:08.843473 2184738 logs.go:282] 0 containers: []
	W0120 16:35:08.843482 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:08.843489 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:08.843565 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:08.883148 2184738 cri.go:89] found id: ""
	I0120 16:35:08.883181 2184738 logs.go:282] 0 containers: []
	W0120 16:35:08.883189 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:08.883195 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:08.883252 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:08.917844 2184738 cri.go:89] found id: ""
	I0120 16:35:08.917885 2184738 logs.go:282] 0 containers: []
	W0120 16:35:08.917898 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:08.917910 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:08.917924 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:08.972931 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:08.972967 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:08.991122 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:08.991162 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:09.073783 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:09.073810 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:09.073823 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:09.152016 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:09.152065 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:11.696357 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:11.710679 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:11.710749 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:11.749517 2184738 cri.go:89] found id: ""
	I0120 16:35:11.749546 2184738 logs.go:282] 0 containers: []
	W0120 16:35:11.749555 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:11.749562 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:11.749619 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:11.801981 2184738 cri.go:89] found id: ""
	I0120 16:35:11.802042 2184738 logs.go:282] 0 containers: []
	W0120 16:35:11.802056 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:11.802065 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:11.802123 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:11.841892 2184738 cri.go:89] found id: ""
	I0120 16:35:11.841934 2184738 logs.go:282] 0 containers: []
	W0120 16:35:11.841948 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:11.841957 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:11.842029 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:11.889446 2184738 cri.go:89] found id: ""
	I0120 16:35:11.889475 2184738 logs.go:282] 0 containers: []
	W0120 16:35:11.889487 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:11.889497 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:11.889585 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:11.930595 2184738 cri.go:89] found id: ""
	I0120 16:35:11.930646 2184738 logs.go:282] 0 containers: []
	W0120 16:35:11.930657 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:11.930664 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:11.930728 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:11.972618 2184738 cri.go:89] found id: ""
	I0120 16:35:11.972647 2184738 logs.go:282] 0 containers: []
	W0120 16:35:11.972655 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:11.972661 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:11.972723 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:12.013229 2184738 cri.go:89] found id: ""
	I0120 16:35:12.013257 2184738 logs.go:282] 0 containers: []
	W0120 16:35:12.013268 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:12.013278 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:12.013345 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:12.052870 2184738 cri.go:89] found id: ""
	I0120 16:35:12.052898 2184738 logs.go:282] 0 containers: []
	W0120 16:35:12.052906 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:12.052916 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:12.052928 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:12.119115 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:12.119159 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:12.134422 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:12.134462 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:12.209912 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:12.209942 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:12.209956 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:12.292824 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:12.292869 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:14.841553 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:14.857963 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:14.858050 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:14.900949 2184738 cri.go:89] found id: ""
	I0120 16:35:14.900978 2184738 logs.go:282] 0 containers: []
	W0120 16:35:14.900986 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:14.900992 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:14.901060 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:14.945794 2184738 cri.go:89] found id: ""
	I0120 16:35:14.945829 2184738 logs.go:282] 0 containers: []
	W0120 16:35:14.945843 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:14.945852 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:14.945931 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:14.989888 2184738 cri.go:89] found id: ""
	I0120 16:35:14.989926 2184738 logs.go:282] 0 containers: []
	W0120 16:35:14.989939 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:14.989948 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:14.990024 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:15.030759 2184738 cri.go:89] found id: ""
	I0120 16:35:15.030794 2184738 logs.go:282] 0 containers: []
	W0120 16:35:15.030805 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:15.030813 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:15.030885 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:15.076631 2184738 cri.go:89] found id: ""
	I0120 16:35:15.076663 2184738 logs.go:282] 0 containers: []
	W0120 16:35:15.076673 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:15.076680 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:15.076750 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:15.116477 2184738 cri.go:89] found id: ""
	I0120 16:35:15.116517 2184738 logs.go:282] 0 containers: []
	W0120 16:35:15.116530 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:15.116540 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:15.116632 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:15.154388 2184738 cri.go:89] found id: ""
	I0120 16:35:15.154427 2184738 logs.go:282] 0 containers: []
	W0120 16:35:15.154441 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:15.154450 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:15.154519 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:15.193663 2184738 cri.go:89] found id: ""
	I0120 16:35:15.193699 2184738 logs.go:282] 0 containers: []
	W0120 16:35:15.193711 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:15.193725 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:15.193745 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:15.246837 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:15.246892 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:15.265953 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:15.266003 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:15.348642 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:15.348662 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:15.348676 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:15.443423 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:15.443472 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:17.993682 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:18.011829 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:18.011902 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:18.055928 2184738 cri.go:89] found id: ""
	I0120 16:35:18.055954 2184738 logs.go:282] 0 containers: []
	W0120 16:35:18.055966 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:18.055973 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:18.056025 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:18.099430 2184738 cri.go:89] found id: ""
	I0120 16:35:18.099468 2184738 logs.go:282] 0 containers: []
	W0120 16:35:18.099479 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:18.099500 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:18.099567 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:18.144577 2184738 cri.go:89] found id: ""
	I0120 16:35:18.144611 2184738 logs.go:282] 0 containers: []
	W0120 16:35:18.144619 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:18.144625 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:18.144694 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:18.190907 2184738 cri.go:89] found id: ""
	I0120 16:35:18.190937 2184738 logs.go:282] 0 containers: []
	W0120 16:35:18.190946 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:18.190952 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:18.191035 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:18.231623 2184738 cri.go:89] found id: ""
	I0120 16:35:18.231656 2184738 logs.go:282] 0 containers: []
	W0120 16:35:18.231667 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:18.231674 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:18.231748 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:18.274481 2184738 cri.go:89] found id: ""
	I0120 16:35:18.274515 2184738 logs.go:282] 0 containers: []
	W0120 16:35:18.274527 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:18.274535 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:18.274621 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:18.314060 2184738 cri.go:89] found id: ""
	I0120 16:35:18.314089 2184738 logs.go:282] 0 containers: []
	W0120 16:35:18.314105 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:18.314112 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:18.314175 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:18.362804 2184738 cri.go:89] found id: ""
	I0120 16:35:18.362829 2184738 logs.go:282] 0 containers: []
	W0120 16:35:18.362839 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:18.362852 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:18.362868 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:18.380850 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:18.380904 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:18.466486 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:18.466509 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:18.466527 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:18.567732 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:18.567768 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:18.617221 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:18.617265 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:21.170907 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:21.188908 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:21.188997 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:21.235079 2184738 cri.go:89] found id: ""
	I0120 16:35:21.235116 2184738 logs.go:282] 0 containers: []
	W0120 16:35:21.235128 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:21.235138 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:21.235208 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:21.273142 2184738 cri.go:89] found id: ""
	I0120 16:35:21.273177 2184738 logs.go:282] 0 containers: []
	W0120 16:35:21.273188 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:21.273196 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:21.273262 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:21.310098 2184738 cri.go:89] found id: ""
	I0120 16:35:21.310140 2184738 logs.go:282] 0 containers: []
	W0120 16:35:21.310152 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:21.310160 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:21.310221 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:21.347911 2184738 cri.go:89] found id: ""
	I0120 16:35:21.347952 2184738 logs.go:282] 0 containers: []
	W0120 16:35:21.347962 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:21.347972 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:21.348042 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:21.391793 2184738 cri.go:89] found id: ""
	I0120 16:35:21.391834 2184738 logs.go:282] 0 containers: []
	W0120 16:35:21.391847 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:21.391864 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:21.391934 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:21.427868 2184738 cri.go:89] found id: ""
	I0120 16:35:21.427901 2184738 logs.go:282] 0 containers: []
	W0120 16:35:21.427912 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:21.427922 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:21.428018 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:21.464330 2184738 cri.go:89] found id: ""
	I0120 16:35:21.464365 2184738 logs.go:282] 0 containers: []
	W0120 16:35:21.464377 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:21.464385 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:21.464459 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:21.499464 2184738 cri.go:89] found id: ""
	I0120 16:35:21.499507 2184738 logs.go:282] 0 containers: []
	W0120 16:35:21.499520 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:21.499535 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:21.499556 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:21.514254 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:21.514291 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:21.589392 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:21.589413 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:21.589426 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:21.665783 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:21.665840 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:21.709732 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:21.709776 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:24.276885 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:24.295422 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:24.295514 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:24.338056 2184738 cri.go:89] found id: ""
	I0120 16:35:24.338107 2184738 logs.go:282] 0 containers: []
	W0120 16:35:24.338118 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:24.338127 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:24.338210 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:24.375652 2184738 cri.go:89] found id: ""
	I0120 16:35:24.375691 2184738 logs.go:282] 0 containers: []
	W0120 16:35:24.375703 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:24.375712 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:24.375784 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:24.421640 2184738 cri.go:89] found id: ""
	I0120 16:35:24.421688 2184738 logs.go:282] 0 containers: []
	W0120 16:35:24.421702 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:24.421712 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:24.421785 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:24.464452 2184738 cri.go:89] found id: ""
	I0120 16:35:24.464493 2184738 logs.go:282] 0 containers: []
	W0120 16:35:24.464506 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:24.464515 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:24.464583 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:24.503132 2184738 cri.go:89] found id: ""
	I0120 16:35:24.503170 2184738 logs.go:282] 0 containers: []
	W0120 16:35:24.503182 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:24.503191 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:24.503259 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:24.550037 2184738 cri.go:89] found id: ""
	I0120 16:35:24.550078 2184738 logs.go:282] 0 containers: []
	W0120 16:35:24.550091 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:24.550100 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:24.550194 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:24.588029 2184738 cri.go:89] found id: ""
	I0120 16:35:24.588066 2184738 logs.go:282] 0 containers: []
	W0120 16:35:24.588078 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:24.588087 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:24.588152 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:24.622070 2184738 cri.go:89] found id: ""
	I0120 16:35:24.622108 2184738 logs.go:282] 0 containers: []
	W0120 16:35:24.622120 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:24.622133 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:24.622150 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:24.636706 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:24.636760 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:24.703949 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:24.703986 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:24.704005 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:24.786277 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:24.786324 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:24.829772 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:24.829808 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:27.387709 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:27.402690 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:27.402770 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:27.439067 2184738 cri.go:89] found id: ""
	I0120 16:35:27.439100 2184738 logs.go:282] 0 containers: []
	W0120 16:35:27.439119 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:27.439131 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:27.439199 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:27.473248 2184738 cri.go:89] found id: ""
	I0120 16:35:27.473280 2184738 logs.go:282] 0 containers: []
	W0120 16:35:27.473290 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:27.473298 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:27.473354 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:27.513345 2184738 cri.go:89] found id: ""
	I0120 16:35:27.513384 2184738 logs.go:282] 0 containers: []
	W0120 16:35:27.513397 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:27.513406 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:27.513483 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:27.551085 2184738 cri.go:89] found id: ""
	I0120 16:35:27.551121 2184738 logs.go:282] 0 containers: []
	W0120 16:35:27.551130 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:27.551137 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:27.551206 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:27.586407 2184738 cri.go:89] found id: ""
	I0120 16:35:27.586444 2184738 logs.go:282] 0 containers: []
	W0120 16:35:27.586458 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:27.586464 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:27.586524 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:27.624889 2184738 cri.go:89] found id: ""
	I0120 16:35:27.624931 2184738 logs.go:282] 0 containers: []
	W0120 16:35:27.624945 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:27.624955 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:27.625031 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:27.658477 2184738 cri.go:89] found id: ""
	I0120 16:35:27.658505 2184738 logs.go:282] 0 containers: []
	W0120 16:35:27.658515 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:27.658521 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:27.658576 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:27.697145 2184738 cri.go:89] found id: ""
	I0120 16:35:27.697184 2184738 logs.go:282] 0 containers: []
	W0120 16:35:27.697196 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:27.697210 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:27.697229 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:27.751395 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:27.751442 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:27.768514 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:27.768579 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:27.848412 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:27.848443 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:27.848462 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:27.933406 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:27.933455 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:30.484912 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:30.499967 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:30.500049 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:30.541009 2184738 cri.go:89] found id: ""
	I0120 16:35:30.541051 2184738 logs.go:282] 0 containers: []
	W0120 16:35:30.541067 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:30.541076 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:30.541153 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:30.586081 2184738 cri.go:89] found id: ""
	I0120 16:35:30.586119 2184738 logs.go:282] 0 containers: []
	W0120 16:35:30.586128 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:30.586135 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:30.586202 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:30.627343 2184738 cri.go:89] found id: ""
	I0120 16:35:30.627373 2184738 logs.go:282] 0 containers: []
	W0120 16:35:30.627385 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:30.627393 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:30.627465 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:30.683621 2184738 cri.go:89] found id: ""
	I0120 16:35:30.683666 2184738 logs.go:282] 0 containers: []
	W0120 16:35:30.683680 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:30.683689 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:30.683752 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:30.721217 2184738 cri.go:89] found id: ""
	I0120 16:35:30.721265 2184738 logs.go:282] 0 containers: []
	W0120 16:35:30.721292 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:30.721301 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:30.721378 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:30.763631 2184738 cri.go:89] found id: ""
	I0120 16:35:30.763664 2184738 logs.go:282] 0 containers: []
	W0120 16:35:30.763676 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:30.763685 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:30.763755 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:30.802042 2184738 cri.go:89] found id: ""
	I0120 16:35:30.802068 2184738 logs.go:282] 0 containers: []
	W0120 16:35:30.802086 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:30.802093 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:30.802156 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:30.849713 2184738 cri.go:89] found id: ""
	I0120 16:35:30.849743 2184738 logs.go:282] 0 containers: []
	W0120 16:35:30.849752 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:30.849764 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:30.849780 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:30.934355 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:30.934391 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:30.934409 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:31.013878 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:31.013925 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:31.062103 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:31.062145 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:31.113900 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:31.113950 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:33.632132 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:33.647777 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:33.647856 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:33.695156 2184738 cri.go:89] found id: ""
	I0120 16:35:33.695196 2184738 logs.go:282] 0 containers: []
	W0120 16:35:33.695207 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:33.695214 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:33.695309 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:33.730825 2184738 cri.go:89] found id: ""
	I0120 16:35:33.730862 2184738 logs.go:282] 0 containers: []
	W0120 16:35:33.730872 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:33.730879 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:33.730943 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:33.767195 2184738 cri.go:89] found id: ""
	I0120 16:35:33.767238 2184738 logs.go:282] 0 containers: []
	W0120 16:35:33.767250 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:33.767259 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:33.767326 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:33.803265 2184738 cri.go:89] found id: ""
	I0120 16:35:33.803308 2184738 logs.go:282] 0 containers: []
	W0120 16:35:33.803320 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:33.803330 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:33.803404 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:33.838841 2184738 cri.go:89] found id: ""
	I0120 16:35:33.838871 2184738 logs.go:282] 0 containers: []
	W0120 16:35:33.838879 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:33.838889 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:33.838948 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:33.876238 2184738 cri.go:89] found id: ""
	I0120 16:35:33.876274 2184738 logs.go:282] 0 containers: []
	W0120 16:35:33.876286 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:33.876295 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:33.876365 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:33.915229 2184738 cri.go:89] found id: ""
	I0120 16:35:33.915266 2184738 logs.go:282] 0 containers: []
	W0120 16:35:33.915278 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:33.915286 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:33.915356 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:33.951355 2184738 cri.go:89] found id: ""
	I0120 16:35:33.951386 2184738 logs.go:282] 0 containers: []
	W0120 16:35:33.951395 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:33.951406 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:33.951419 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:34.001545 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:34.001585 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:34.016113 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:34.016152 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:34.089800 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:34.089831 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:34.089850 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:34.174487 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:34.174537 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:36.717367 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:36.732369 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:36.732451 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:36.775705 2184738 cri.go:89] found id: ""
	I0120 16:35:36.775736 2184738 logs.go:282] 0 containers: []
	W0120 16:35:36.775756 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:36.775765 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:36.775832 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:36.820059 2184738 cri.go:89] found id: ""
	I0120 16:35:36.820090 2184738 logs.go:282] 0 containers: []
	W0120 16:35:36.820098 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:36.820105 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:36.820161 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:36.863916 2184738 cri.go:89] found id: ""
	I0120 16:35:36.863957 2184738 logs.go:282] 0 containers: []
	W0120 16:35:36.863970 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:36.863979 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:36.864051 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:36.904983 2184738 cri.go:89] found id: ""
	I0120 16:35:36.905025 2184738 logs.go:282] 0 containers: []
	W0120 16:35:36.905038 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:36.905048 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:36.905135 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:36.948107 2184738 cri.go:89] found id: ""
	I0120 16:35:36.948142 2184738 logs.go:282] 0 containers: []
	W0120 16:35:36.948155 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:36.948163 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:36.948231 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:36.995524 2184738 cri.go:89] found id: ""
	I0120 16:35:36.995558 2184738 logs.go:282] 0 containers: []
	W0120 16:35:36.995569 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:36.995578 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:36.995649 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:37.045454 2184738 cri.go:89] found id: ""
	I0120 16:35:37.045490 2184738 logs.go:282] 0 containers: []
	W0120 16:35:37.045502 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:37.045512 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:37.045584 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:37.083133 2184738 cri.go:89] found id: ""
	I0120 16:35:37.083170 2184738 logs.go:282] 0 containers: []
	W0120 16:35:37.083182 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:37.083197 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:37.083214 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:37.136122 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:37.136181 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:37.150857 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:37.150889 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:37.227412 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:37.227507 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:37.227535 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:37.316240 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:37.316289 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:39.874787 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:39.890773 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:39.890873 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:39.936452 2184738 cri.go:89] found id: ""
	I0120 16:35:39.936492 2184738 logs.go:282] 0 containers: []
	W0120 16:35:39.936504 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:39.936512 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:39.936576 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:39.992424 2184738 cri.go:89] found id: ""
	I0120 16:35:39.992459 2184738 logs.go:282] 0 containers: []
	W0120 16:35:39.992471 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:39.992480 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:39.992539 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:40.049132 2184738 cri.go:89] found id: ""
	I0120 16:35:40.049159 2184738 logs.go:282] 0 containers: []
	W0120 16:35:40.049167 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:40.049173 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:40.049222 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:40.097331 2184738 cri.go:89] found id: ""
	I0120 16:35:40.097367 2184738 logs.go:282] 0 containers: []
	W0120 16:35:40.097380 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:40.097389 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:40.097456 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:40.144144 2184738 cri.go:89] found id: ""
	I0120 16:35:40.144182 2184738 logs.go:282] 0 containers: []
	W0120 16:35:40.144194 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:40.144202 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:40.144271 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:40.192544 2184738 cri.go:89] found id: ""
	I0120 16:35:40.192573 2184738 logs.go:282] 0 containers: []
	W0120 16:35:40.192585 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:40.192594 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:40.192664 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:40.246671 2184738 cri.go:89] found id: ""
	I0120 16:35:40.246709 2184738 logs.go:282] 0 containers: []
	W0120 16:35:40.246723 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:40.246733 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:40.246809 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:40.284366 2184738 cri.go:89] found id: ""
	I0120 16:35:40.284400 2184738 logs.go:282] 0 containers: []
	W0120 16:35:40.284412 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:40.284426 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:40.284445 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:40.361733 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:40.361764 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:40.361791 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:40.475109 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:40.475168 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:40.524900 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:40.524949 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:40.596703 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:40.596773 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:43.115699 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:43.139140 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:43.139228 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:43.212500 2184738 cri.go:89] found id: ""
	I0120 16:35:43.212535 2184738 logs.go:282] 0 containers: []
	W0120 16:35:43.212546 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:43.212554 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:43.212623 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:43.316414 2184738 cri.go:89] found id: ""
	I0120 16:35:43.316446 2184738 logs.go:282] 0 containers: []
	W0120 16:35:43.316459 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:43.316473 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:43.316549 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:43.376030 2184738 cri.go:89] found id: ""
	I0120 16:35:43.376063 2184738 logs.go:282] 0 containers: []
	W0120 16:35:43.376079 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:43.376087 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:43.376157 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:43.429391 2184738 cri.go:89] found id: ""
	I0120 16:35:43.429422 2184738 logs.go:282] 0 containers: []
	W0120 16:35:43.429434 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:43.429441 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:43.429527 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:43.481644 2184738 cri.go:89] found id: ""
	I0120 16:35:43.481682 2184738 logs.go:282] 0 containers: []
	W0120 16:35:43.481695 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:43.481703 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:43.481775 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:43.529644 2184738 cri.go:89] found id: ""
	I0120 16:35:43.529675 2184738 logs.go:282] 0 containers: []
	W0120 16:35:43.529686 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:43.529695 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:43.529776 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:43.601832 2184738 cri.go:89] found id: ""
	I0120 16:35:43.601868 2184738 logs.go:282] 0 containers: []
	W0120 16:35:43.601880 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:43.601889 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:43.601957 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:43.663061 2184738 cri.go:89] found id: ""
	I0120 16:35:43.663099 2184738 logs.go:282] 0 containers: []
	W0120 16:35:43.663111 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:43.663124 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:43.663141 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:43.729751 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:43.729796 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:43.814907 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:43.814960 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:43.843175 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:43.843217 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:43.960802 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:43.960836 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:43.960857 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:46.585804 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:46.604976 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:46.605059 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:46.644576 2184738 cri.go:89] found id: ""
	I0120 16:35:46.644644 2184738 logs.go:282] 0 containers: []
	W0120 16:35:46.644664 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:46.644684 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:46.644765 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:46.689424 2184738 cri.go:89] found id: ""
	I0120 16:35:46.689460 2184738 logs.go:282] 0 containers: []
	W0120 16:35:46.689469 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:46.689476 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:46.689547 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:46.729530 2184738 cri.go:89] found id: ""
	I0120 16:35:46.729560 2184738 logs.go:282] 0 containers: []
	W0120 16:35:46.729569 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:46.729575 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:46.729640 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:46.771282 2184738 cri.go:89] found id: ""
	I0120 16:35:46.771326 2184738 logs.go:282] 0 containers: []
	W0120 16:35:46.771338 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:46.771346 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:46.771428 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:46.814286 2184738 cri.go:89] found id: ""
	I0120 16:35:46.814345 2184738 logs.go:282] 0 containers: []
	W0120 16:35:46.814359 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:46.814369 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:46.814444 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:46.864026 2184738 cri.go:89] found id: ""
	I0120 16:35:46.864065 2184738 logs.go:282] 0 containers: []
	W0120 16:35:46.864077 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:46.864085 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:46.864155 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:46.905700 2184738 cri.go:89] found id: ""
	I0120 16:35:46.905731 2184738 logs.go:282] 0 containers: []
	W0120 16:35:46.905742 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:46.905752 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:46.905811 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:46.947717 2184738 cri.go:89] found id: ""
	I0120 16:35:46.947746 2184738 logs.go:282] 0 containers: []
	W0120 16:35:46.947758 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:46.947772 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:46.947788 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:47.040376 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:47.040420 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:47.058931 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:47.058971 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:47.160900 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:47.160929 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:47.160949 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:47.265292 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:47.265351 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:49.818533 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:49.836756 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:49.836839 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:49.878833 2184738 cri.go:89] found id: ""
	I0120 16:35:49.878868 2184738 logs.go:282] 0 containers: []
	W0120 16:35:49.878880 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:49.878889 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:49.878955 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:49.918887 2184738 cri.go:89] found id: ""
	I0120 16:35:49.918930 2184738 logs.go:282] 0 containers: []
	W0120 16:35:49.918942 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:49.918952 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:49.919036 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:49.961493 2184738 cri.go:89] found id: ""
	I0120 16:35:49.961526 2184738 logs.go:282] 0 containers: []
	W0120 16:35:49.961539 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:49.961547 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:49.961631 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:50.002394 2184738 cri.go:89] found id: ""
	I0120 16:35:50.002428 2184738 logs.go:282] 0 containers: []
	W0120 16:35:50.002439 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:50.002448 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:50.002516 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:50.042359 2184738 cri.go:89] found id: ""
	I0120 16:35:50.042396 2184738 logs.go:282] 0 containers: []
	W0120 16:35:50.042410 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:50.042420 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:50.042490 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:50.078831 2184738 cri.go:89] found id: ""
	I0120 16:35:50.078872 2184738 logs.go:282] 0 containers: []
	W0120 16:35:50.078885 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:50.078893 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:50.078963 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:50.120656 2184738 cri.go:89] found id: ""
	I0120 16:35:50.120691 2184738 logs.go:282] 0 containers: []
	W0120 16:35:50.120702 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:50.120712 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:50.120780 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:50.169404 2184738 cri.go:89] found id: ""
	I0120 16:35:50.169435 2184738 logs.go:282] 0 containers: []
	W0120 16:35:50.169447 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:50.169462 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:50.169481 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:50.246949 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:50.247010 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:50.266816 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:50.266856 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:50.375071 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:50.375105 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:50.375124 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:50.480870 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:50.480923 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:53.034821 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:53.057264 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:53.057369 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:53.115452 2184738 cri.go:89] found id: ""
	I0120 16:35:53.115492 2184738 logs.go:282] 0 containers: []
	W0120 16:35:53.115505 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:53.115514 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:53.115584 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:53.170372 2184738 cri.go:89] found id: ""
	I0120 16:35:53.170416 2184738 logs.go:282] 0 containers: []
	W0120 16:35:53.170430 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:53.170440 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:53.170517 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:53.224176 2184738 cri.go:89] found id: ""
	I0120 16:35:53.224222 2184738 logs.go:282] 0 containers: []
	W0120 16:35:53.224235 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:53.224243 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:53.224317 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:53.321567 2184738 cri.go:89] found id: ""
	I0120 16:35:53.321600 2184738 logs.go:282] 0 containers: []
	W0120 16:35:53.321608 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:53.321614 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:53.321671 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:53.363140 2184738 cri.go:89] found id: ""
	I0120 16:35:53.363239 2184738 logs.go:282] 0 containers: []
	W0120 16:35:53.363266 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:53.363286 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:53.363415 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:53.406071 2184738 cri.go:89] found id: ""
	I0120 16:35:53.406118 2184738 logs.go:282] 0 containers: []
	W0120 16:35:53.406130 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:53.406140 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:53.406215 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:53.457791 2184738 cri.go:89] found id: ""
	I0120 16:35:53.457830 2184738 logs.go:282] 0 containers: []
	W0120 16:35:53.457841 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:53.457850 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:53.457927 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:53.509084 2184738 cri.go:89] found id: ""
	I0120 16:35:53.509189 2184738 logs.go:282] 0 containers: []
	W0120 16:35:53.509216 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:53.509255 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:53.509302 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:53.592755 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:53.592814 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:53.622566 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:53.622640 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:53.729445 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:53.729478 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:53.729498 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:53.861083 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:53.861149 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:56.420936 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:56.435659 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:56.435745 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:56.478517 2184738 cri.go:89] found id: ""
	I0120 16:35:56.478556 2184738 logs.go:282] 0 containers: []
	W0120 16:35:56.478568 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:56.478576 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:56.478667 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:56.528088 2184738 cri.go:89] found id: ""
	I0120 16:35:56.528123 2184738 logs.go:282] 0 containers: []
	W0120 16:35:56.528136 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:56.528144 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:56.528217 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:56.566146 2184738 cri.go:89] found id: ""
	I0120 16:35:56.566184 2184738 logs.go:282] 0 containers: []
	W0120 16:35:56.566197 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:56.566206 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:56.566297 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:56.602073 2184738 cri.go:89] found id: ""
	I0120 16:35:56.602109 2184738 logs.go:282] 0 containers: []
	W0120 16:35:56.602120 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:56.602129 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:56.602198 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:56.637434 2184738 cri.go:89] found id: ""
	I0120 16:35:56.637471 2184738 logs.go:282] 0 containers: []
	W0120 16:35:56.637482 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:56.637490 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:56.637565 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:56.678156 2184738 cri.go:89] found id: ""
	I0120 16:35:56.678203 2184738 logs.go:282] 0 containers: []
	W0120 16:35:56.678212 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:56.678220 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:56.678296 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:56.712499 2184738 cri.go:89] found id: ""
	I0120 16:35:56.712537 2184738 logs.go:282] 0 containers: []
	W0120 16:35:56.712550 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:56.712557 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:56.712629 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:56.749714 2184738 cri.go:89] found id: ""
	I0120 16:35:56.749749 2184738 logs.go:282] 0 containers: []
	W0120 16:35:56.749760 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:56.749772 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:56.749788 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:35:56.812987 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:35:56.813028 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:35:56.829676 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:35:56.829732 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:35:56.912252 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:35:56.912282 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:35:56.912299 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:35:57.004474 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:57.004522 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:59.547303 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:35:59.566202 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:35:59.566288 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:35:59.611754 2184738 cri.go:89] found id: ""
	I0120 16:35:59.611788 2184738 logs.go:282] 0 containers: []
	W0120 16:35:59.611800 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:35:59.611819 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:35:59.611889 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:35:59.659814 2184738 cri.go:89] found id: ""
	I0120 16:35:59.659845 2184738 logs.go:282] 0 containers: []
	W0120 16:35:59.659855 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:35:59.659863 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:35:59.659917 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:35:59.695804 2184738 cri.go:89] found id: ""
	I0120 16:35:59.695836 2184738 logs.go:282] 0 containers: []
	W0120 16:35:59.695847 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:35:59.695855 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:35:59.695925 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:35:59.733156 2184738 cri.go:89] found id: ""
	I0120 16:35:59.733186 2184738 logs.go:282] 0 containers: []
	W0120 16:35:59.733198 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:35:59.733206 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:35:59.733276 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:35:59.778559 2184738 cri.go:89] found id: ""
	I0120 16:35:59.778594 2184738 logs.go:282] 0 containers: []
	W0120 16:35:59.778616 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:35:59.778624 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:35:59.778693 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:35:59.818806 2184738 cri.go:89] found id: ""
	I0120 16:35:59.818836 2184738 logs.go:282] 0 containers: []
	W0120 16:35:59.818845 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:35:59.818851 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:35:59.818972 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:35:59.857395 2184738 cri.go:89] found id: ""
	I0120 16:35:59.857421 2184738 logs.go:282] 0 containers: []
	W0120 16:35:59.857430 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:35:59.857436 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:35:59.857494 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:35:59.913183 2184738 cri.go:89] found id: ""
	I0120 16:35:59.913213 2184738 logs.go:282] 0 containers: []
	W0120 16:35:59.913222 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:35:59.913232 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:35:59.913246 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:35:59.960162 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:35:59.960192 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:00.014235 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:00.014290 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:00.028979 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:00.029023 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:00.107454 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:00.107480 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:00.107493 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:02.694993 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:02.710578 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:02.710690 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:02.750295 2184738 cri.go:89] found id: ""
	I0120 16:36:02.750331 2184738 logs.go:282] 0 containers: []
	W0120 16:36:02.750342 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:02.750352 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:02.750417 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:02.792149 2184738 cri.go:89] found id: ""
	I0120 16:36:02.792185 2184738 logs.go:282] 0 containers: []
	W0120 16:36:02.792196 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:02.792203 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:02.792273 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:02.834815 2184738 cri.go:89] found id: ""
	I0120 16:36:02.834847 2184738 logs.go:282] 0 containers: []
	W0120 16:36:02.834857 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:02.834864 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:02.834926 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:02.872368 2184738 cri.go:89] found id: ""
	I0120 16:36:02.872403 2184738 logs.go:282] 0 containers: []
	W0120 16:36:02.872415 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:02.872423 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:02.872529 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:02.909507 2184738 cri.go:89] found id: ""
	I0120 16:36:02.909538 2184738 logs.go:282] 0 containers: []
	W0120 16:36:02.909546 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:02.909553 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:02.909611 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:02.949436 2184738 cri.go:89] found id: ""
	I0120 16:36:02.949477 2184738 logs.go:282] 0 containers: []
	W0120 16:36:02.949489 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:02.949507 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:02.949598 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:02.990181 2184738 cri.go:89] found id: ""
	I0120 16:36:02.990215 2184738 logs.go:282] 0 containers: []
	W0120 16:36:02.990227 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:02.990236 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:02.990304 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:03.030434 2184738 cri.go:89] found id: ""
	I0120 16:36:03.030464 2184738 logs.go:282] 0 containers: []
	W0120 16:36:03.030473 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:03.030484 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:03.030498 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:03.089594 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:03.089638 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:03.105734 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:03.105785 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:03.193497 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:03.193540 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:03.193565 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:03.276766 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:03.276813 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:05.825214 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:05.843173 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:05.843258 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:05.922022 2184738 cri.go:89] found id: ""
	I0120 16:36:05.922058 2184738 logs.go:282] 0 containers: []
	W0120 16:36:05.922070 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:05.922079 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:05.922151 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:05.969782 2184738 cri.go:89] found id: ""
	I0120 16:36:05.969818 2184738 logs.go:282] 0 containers: []
	W0120 16:36:05.969830 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:05.969838 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:05.969929 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:06.012938 2184738 cri.go:89] found id: ""
	I0120 16:36:06.012979 2184738 logs.go:282] 0 containers: []
	W0120 16:36:06.012992 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:06.013000 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:06.013073 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:06.060521 2184738 cri.go:89] found id: ""
	I0120 16:36:06.060557 2184738 logs.go:282] 0 containers: []
	W0120 16:36:06.060568 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:06.060576 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:06.060647 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:06.113866 2184738 cri.go:89] found id: ""
	I0120 16:36:06.113901 2184738 logs.go:282] 0 containers: []
	W0120 16:36:06.113910 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:06.113917 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:06.113981 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:06.155417 2184738 cri.go:89] found id: ""
	I0120 16:36:06.155444 2184738 logs.go:282] 0 containers: []
	W0120 16:36:06.155452 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:06.155459 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:06.155550 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:06.205418 2184738 cri.go:89] found id: ""
	I0120 16:36:06.205459 2184738 logs.go:282] 0 containers: []
	W0120 16:36:06.205469 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:06.205477 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:06.205543 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:06.260481 2184738 cri.go:89] found id: ""
	I0120 16:36:06.260516 2184738 logs.go:282] 0 containers: []
	W0120 16:36:06.260528 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:06.260543 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:06.260558 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:06.311475 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:06.311519 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:06.326527 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:06.326558 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:06.418239 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:06.418267 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:06.418283 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:06.499417 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:06.499454 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:09.061432 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:09.076835 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:09.076912 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:09.127783 2184738 cri.go:89] found id: ""
	I0120 16:36:09.127824 2184738 logs.go:282] 0 containers: []
	W0120 16:36:09.127838 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:09.127848 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:09.127926 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:09.165541 2184738 cri.go:89] found id: ""
	I0120 16:36:09.165573 2184738 logs.go:282] 0 containers: []
	W0120 16:36:09.165583 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:09.165589 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:09.165658 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:09.201546 2184738 cri.go:89] found id: ""
	I0120 16:36:09.201578 2184738 logs.go:282] 0 containers: []
	W0120 16:36:09.201590 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:09.201599 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:09.201667 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:09.245012 2184738 cri.go:89] found id: ""
	I0120 16:36:09.245050 2184738 logs.go:282] 0 containers: []
	W0120 16:36:09.245063 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:09.245072 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:09.245169 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:09.286761 2184738 cri.go:89] found id: ""
	I0120 16:36:09.286785 2184738 logs.go:282] 0 containers: []
	W0120 16:36:09.286795 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:09.286806 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:09.286873 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:09.333452 2184738 cri.go:89] found id: ""
	I0120 16:36:09.333489 2184738 logs.go:282] 0 containers: []
	W0120 16:36:09.333500 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:09.333510 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:09.333582 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:09.377603 2184738 cri.go:89] found id: ""
	I0120 16:36:09.377634 2184738 logs.go:282] 0 containers: []
	W0120 16:36:09.377646 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:09.377655 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:09.377726 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:09.426828 2184738 cri.go:89] found id: ""
	I0120 16:36:09.426856 2184738 logs.go:282] 0 containers: []
	W0120 16:36:09.426865 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:09.426877 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:09.426891 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:09.476219 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:09.476258 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:09.490776 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:09.490813 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:09.572949 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:09.572996 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:09.573015 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:09.661450 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:09.661491 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:12.203194 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:12.218204 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:12.218295 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:12.262335 2184738 cri.go:89] found id: ""
	I0120 16:36:12.262372 2184738 logs.go:282] 0 containers: []
	W0120 16:36:12.262384 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:12.262394 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:12.262464 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:12.300985 2184738 cri.go:89] found id: ""
	I0120 16:36:12.301019 2184738 logs.go:282] 0 containers: []
	W0120 16:36:12.301038 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:12.301048 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:12.301117 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:12.359239 2184738 cri.go:89] found id: ""
	I0120 16:36:12.359277 2184738 logs.go:282] 0 containers: []
	W0120 16:36:12.359288 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:12.359296 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:12.359364 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:12.432047 2184738 cri.go:89] found id: ""
	I0120 16:36:12.432093 2184738 logs.go:282] 0 containers: []
	W0120 16:36:12.432113 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:12.432123 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:12.432220 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:12.486989 2184738 cri.go:89] found id: ""
	I0120 16:36:12.487029 2184738 logs.go:282] 0 containers: []
	W0120 16:36:12.487042 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:12.487052 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:12.487134 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:12.540400 2184738 cri.go:89] found id: ""
	I0120 16:36:12.540445 2184738 logs.go:282] 0 containers: []
	W0120 16:36:12.540465 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:12.540479 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:12.540554 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:12.578402 2184738 cri.go:89] found id: ""
	I0120 16:36:12.578444 2184738 logs.go:282] 0 containers: []
	W0120 16:36:12.578458 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:12.578467 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:12.578545 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:12.615088 2184738 cri.go:89] found id: ""
	I0120 16:36:12.615133 2184738 logs.go:282] 0 containers: []
	W0120 16:36:12.615155 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:12.615170 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:12.615188 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:12.692333 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:12.692366 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:12.692389 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:12.789188 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:12.789243 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:12.838577 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:12.838636 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:12.894141 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:12.894184 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:15.412061 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:15.428068 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:15.428177 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:15.469328 2184738 cri.go:89] found id: ""
	I0120 16:36:15.469372 2184738 logs.go:282] 0 containers: []
	W0120 16:36:15.469384 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:15.469393 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:15.469464 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:15.517532 2184738 cri.go:89] found id: ""
	I0120 16:36:15.517568 2184738 logs.go:282] 0 containers: []
	W0120 16:36:15.517583 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:15.517591 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:15.517664 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:15.559734 2184738 cri.go:89] found id: ""
	I0120 16:36:15.559775 2184738 logs.go:282] 0 containers: []
	W0120 16:36:15.559786 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:15.559795 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:15.559885 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:15.599666 2184738 cri.go:89] found id: ""
	I0120 16:36:15.599701 2184738 logs.go:282] 0 containers: []
	W0120 16:36:15.599715 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:15.599722 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:15.599787 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:15.643155 2184738 cri.go:89] found id: ""
	I0120 16:36:15.643192 2184738 logs.go:282] 0 containers: []
	W0120 16:36:15.643204 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:15.643213 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:15.643280 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:15.684135 2184738 cri.go:89] found id: ""
	I0120 16:36:15.684167 2184738 logs.go:282] 0 containers: []
	W0120 16:36:15.684179 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:15.684193 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:15.684260 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:15.728351 2184738 cri.go:89] found id: ""
	I0120 16:36:15.728382 2184738 logs.go:282] 0 containers: []
	W0120 16:36:15.728393 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:15.728402 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:15.728459 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:15.770685 2184738 cri.go:89] found id: ""
	I0120 16:36:15.770720 2184738 logs.go:282] 0 containers: []
	W0120 16:36:15.770733 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:15.770750 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:15.770768 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:15.817570 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:15.817602 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:15.873588 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:15.873635 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:15.889470 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:15.889506 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:15.967560 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:15.967587 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:15.967604 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:18.557940 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:18.574306 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:18.574397 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:18.614431 2184738 cri.go:89] found id: ""
	I0120 16:36:18.614465 2184738 logs.go:282] 0 containers: []
	W0120 16:36:18.614476 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:18.614485 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:18.614557 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:18.651700 2184738 cri.go:89] found id: ""
	I0120 16:36:18.651735 2184738 logs.go:282] 0 containers: []
	W0120 16:36:18.651749 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:18.651757 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:18.651827 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:18.694768 2184738 cri.go:89] found id: ""
	I0120 16:36:18.694802 2184738 logs.go:282] 0 containers: []
	W0120 16:36:18.694814 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:18.694823 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:18.694930 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:18.734051 2184738 cri.go:89] found id: ""
	I0120 16:36:18.734082 2184738 logs.go:282] 0 containers: []
	W0120 16:36:18.734102 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:18.734111 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:18.734175 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:18.780313 2184738 cri.go:89] found id: ""
	I0120 16:36:18.780360 2184738 logs.go:282] 0 containers: []
	W0120 16:36:18.780373 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:18.780391 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:18.780465 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:18.826946 2184738 cri.go:89] found id: ""
	I0120 16:36:18.826985 2184738 logs.go:282] 0 containers: []
	W0120 16:36:18.826999 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:18.827008 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:18.827076 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:18.874203 2184738 cri.go:89] found id: ""
	I0120 16:36:18.874237 2184738 logs.go:282] 0 containers: []
	W0120 16:36:18.874249 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:18.874257 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:18.874332 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:18.924592 2184738 cri.go:89] found id: ""
	I0120 16:36:18.924631 2184738 logs.go:282] 0 containers: []
	W0120 16:36:18.924644 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:18.924658 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:18.924676 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:18.985424 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:18.985466 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:19.000620 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:19.000650 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:19.085401 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:19.085428 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:19.085447 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:19.166096 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:19.166141 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:21.716621 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:21.731754 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:21.731849 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:21.780561 2184738 cri.go:89] found id: ""
	I0120 16:36:21.780605 2184738 logs.go:282] 0 containers: []
	W0120 16:36:21.780619 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:21.780627 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:21.780704 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:21.824874 2184738 cri.go:89] found id: ""
	I0120 16:36:21.824910 2184738 logs.go:282] 0 containers: []
	W0120 16:36:21.824921 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:21.824929 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:21.824996 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:21.868704 2184738 cri.go:89] found id: ""
	I0120 16:36:21.868739 2184738 logs.go:282] 0 containers: []
	W0120 16:36:21.868752 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:21.868761 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:21.868834 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:21.908597 2184738 cri.go:89] found id: ""
	I0120 16:36:21.908693 2184738 logs.go:282] 0 containers: []
	W0120 16:36:21.908709 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:21.908718 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:21.908787 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:21.953901 2184738 cri.go:89] found id: ""
	I0120 16:36:21.953930 2184738 logs.go:282] 0 containers: []
	W0120 16:36:21.953938 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:21.953944 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:21.953996 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:21.990667 2184738 cri.go:89] found id: ""
	I0120 16:36:21.990701 2184738 logs.go:282] 0 containers: []
	W0120 16:36:21.990710 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:21.990717 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:21.990777 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:22.033718 2184738 cri.go:89] found id: ""
	I0120 16:36:22.033753 2184738 logs.go:282] 0 containers: []
	W0120 16:36:22.033764 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:22.033773 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:22.033843 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:22.082989 2184738 cri.go:89] found id: ""
	I0120 16:36:22.083023 2184738 logs.go:282] 0 containers: []
	W0120 16:36:22.083033 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:22.083055 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:22.083075 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:22.141313 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:22.141354 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:22.161089 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:22.161130 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:22.247760 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:22.247788 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:22.247803 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:22.332283 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:22.332348 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:24.882870 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:24.897073 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:24.897169 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:24.943064 2184738 cri.go:89] found id: ""
	I0120 16:36:24.943104 2184738 logs.go:282] 0 containers: []
	W0120 16:36:24.943119 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:24.943125 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:24.943185 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:24.986996 2184738 cri.go:89] found id: ""
	I0120 16:36:24.987027 2184738 logs.go:282] 0 containers: []
	W0120 16:36:24.987039 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:24.987049 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:24.987141 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:25.028290 2184738 cri.go:89] found id: ""
	I0120 16:36:25.028330 2184738 logs.go:282] 0 containers: []
	W0120 16:36:25.028343 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:25.028352 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:25.028426 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:25.068257 2184738 cri.go:89] found id: ""
	I0120 16:36:25.068296 2184738 logs.go:282] 0 containers: []
	W0120 16:36:25.068310 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:25.068319 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:25.068400 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:25.108675 2184738 cri.go:89] found id: ""
	I0120 16:36:25.108725 2184738 logs.go:282] 0 containers: []
	W0120 16:36:25.108737 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:25.108746 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:25.108818 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:25.146503 2184738 cri.go:89] found id: ""
	I0120 16:36:25.146533 2184738 logs.go:282] 0 containers: []
	W0120 16:36:25.146544 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:25.146553 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:25.146642 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:25.191807 2184738 cri.go:89] found id: ""
	I0120 16:36:25.191852 2184738 logs.go:282] 0 containers: []
	W0120 16:36:25.191864 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:25.191883 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:25.191969 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:25.239182 2184738 cri.go:89] found id: ""
	I0120 16:36:25.239230 2184738 logs.go:282] 0 containers: []
	W0120 16:36:25.239244 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:25.239257 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:25.239274 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:25.329291 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:25.329321 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:25.329338 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:25.410265 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:25.410322 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:25.463738 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:25.463783 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:25.522788 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:25.522830 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:28.042651 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:28.056736 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:28.056819 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:28.095013 2184738 cri.go:89] found id: ""
	I0120 16:36:28.095054 2184738 logs.go:282] 0 containers: []
	W0120 16:36:28.095067 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:28.095076 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:28.095148 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:28.132855 2184738 cri.go:89] found id: ""
	I0120 16:36:28.132896 2184738 logs.go:282] 0 containers: []
	W0120 16:36:28.132909 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:28.132918 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:28.132996 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:28.173785 2184738 cri.go:89] found id: ""
	I0120 16:36:28.173836 2184738 logs.go:282] 0 containers: []
	W0120 16:36:28.173850 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:28.173859 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:28.173931 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:28.214847 2184738 cri.go:89] found id: ""
	I0120 16:36:28.214889 2184738 logs.go:282] 0 containers: []
	W0120 16:36:28.214902 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:28.214912 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:28.214995 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:28.253782 2184738 cri.go:89] found id: ""
	I0120 16:36:28.253825 2184738 logs.go:282] 0 containers: []
	W0120 16:36:28.253838 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:28.253847 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:28.253937 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:28.304065 2184738 cri.go:89] found id: ""
	I0120 16:36:28.304105 2184738 logs.go:282] 0 containers: []
	W0120 16:36:28.304117 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:28.304128 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:28.304210 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:28.355822 2184738 cri.go:89] found id: ""
	I0120 16:36:28.355862 2184738 logs.go:282] 0 containers: []
	W0120 16:36:28.355874 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:28.355883 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:28.355964 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:28.402142 2184738 cri.go:89] found id: ""
	I0120 16:36:28.402182 2184738 logs.go:282] 0 containers: []
	W0120 16:36:28.402195 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:28.402207 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:28.402221 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:28.487564 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:28.487611 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:28.541164 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:28.541215 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:28.594924 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:28.594982 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:28.609928 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:28.609963 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:28.689063 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:31.190792 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:31.209109 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:31.209200 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:31.251445 2184738 cri.go:89] found id: ""
	I0120 16:36:31.251482 2184738 logs.go:282] 0 containers: []
	W0120 16:36:31.251496 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:31.251504 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:31.251580 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:31.292544 2184738 cri.go:89] found id: ""
	I0120 16:36:31.292572 2184738 logs.go:282] 0 containers: []
	W0120 16:36:31.292583 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:31.292591 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:31.292648 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:31.338184 2184738 cri.go:89] found id: ""
	I0120 16:36:31.338218 2184738 logs.go:282] 0 containers: []
	W0120 16:36:31.338228 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:31.338237 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:31.338301 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:31.389711 2184738 cri.go:89] found id: ""
	I0120 16:36:31.389746 2184738 logs.go:282] 0 containers: []
	W0120 16:36:31.389757 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:31.389766 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:31.389838 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:31.426152 2184738 cri.go:89] found id: ""
	I0120 16:36:31.426187 2184738 logs.go:282] 0 containers: []
	W0120 16:36:31.426199 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:31.426208 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:31.426283 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:31.473989 2184738 cri.go:89] found id: ""
	I0120 16:36:31.474041 2184738 logs.go:282] 0 containers: []
	W0120 16:36:31.474063 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:31.474073 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:31.474149 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:31.512177 2184738 cri.go:89] found id: ""
	I0120 16:36:31.512207 2184738 logs.go:282] 0 containers: []
	W0120 16:36:31.512216 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:31.512221 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:31.512275 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:31.553943 2184738 cri.go:89] found id: ""
	I0120 16:36:31.553972 2184738 logs.go:282] 0 containers: []
	W0120 16:36:31.553980 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:31.553990 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:31.554004 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:31.615173 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:31.615204 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:31.630219 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:31.630255 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:31.706899 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:31.706928 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:31.706952 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:31.794289 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:31.794334 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:34.333657 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:34.348588 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:34.348682 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:34.387495 2184738 cri.go:89] found id: ""
	I0120 16:36:34.387540 2184738 logs.go:282] 0 containers: []
	W0120 16:36:34.387554 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:34.387563 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:34.387662 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:34.423356 2184738 cri.go:89] found id: ""
	I0120 16:36:34.423385 2184738 logs.go:282] 0 containers: []
	W0120 16:36:34.423394 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:34.423400 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:34.423465 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:34.466708 2184738 cri.go:89] found id: ""
	I0120 16:36:34.466750 2184738 logs.go:282] 0 containers: []
	W0120 16:36:34.466763 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:34.466771 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:34.466840 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:34.504610 2184738 cri.go:89] found id: ""
	I0120 16:36:34.504646 2184738 logs.go:282] 0 containers: []
	W0120 16:36:34.504657 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:34.504665 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:34.504725 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:34.542283 2184738 cri.go:89] found id: ""
	I0120 16:36:34.542316 2184738 logs.go:282] 0 containers: []
	W0120 16:36:34.542329 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:34.542337 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:34.542396 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:34.579769 2184738 cri.go:89] found id: ""
	I0120 16:36:34.579802 2184738 logs.go:282] 0 containers: []
	W0120 16:36:34.579810 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:34.579817 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:34.579882 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:34.614154 2184738 cri.go:89] found id: ""
	I0120 16:36:34.614193 2184738 logs.go:282] 0 containers: []
	W0120 16:36:34.614205 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:34.614214 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:34.614280 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:34.652696 2184738 cri.go:89] found id: ""
	I0120 16:36:34.652729 2184738 logs.go:282] 0 containers: []
	W0120 16:36:34.652741 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:34.652753 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:34.652769 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:34.708540 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:34.708587 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:34.725836 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:34.725891 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:34.804171 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:34.804199 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:34.804212 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:34.890549 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:34.890598 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:37.433847 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:37.447927 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:36:37.448027 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:36:37.493777 2184738 cri.go:89] found id: ""
	I0120 16:36:37.493836 2184738 logs.go:282] 0 containers: []
	W0120 16:36:37.493849 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:36:37.493867 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:36:37.493945 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:36:37.529143 2184738 cri.go:89] found id: ""
	I0120 16:36:37.529178 2184738 logs.go:282] 0 containers: []
	W0120 16:36:37.529191 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:36:37.529199 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:36:37.529277 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:36:37.570778 2184738 cri.go:89] found id: ""
	I0120 16:36:37.570809 2184738 logs.go:282] 0 containers: []
	W0120 16:36:37.570820 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:36:37.570829 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:36:37.570905 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:36:37.617280 2184738 cri.go:89] found id: ""
	I0120 16:36:37.617310 2184738 logs.go:282] 0 containers: []
	W0120 16:36:37.617319 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:36:37.617328 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:36:37.617402 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:36:37.657754 2184738 cri.go:89] found id: ""
	I0120 16:36:37.657793 2184738 logs.go:282] 0 containers: []
	W0120 16:36:37.657805 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:36:37.657814 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:36:37.657876 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:36:37.693154 2184738 cri.go:89] found id: ""
	I0120 16:36:37.693194 2184738 logs.go:282] 0 containers: []
	W0120 16:36:37.693207 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:36:37.693216 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:36:37.693289 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:36:37.744890 2184738 cri.go:89] found id: ""
	I0120 16:36:37.744922 2184738 logs.go:282] 0 containers: []
	W0120 16:36:37.744932 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:36:37.744940 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:36:37.745010 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:36:37.786414 2184738 cri.go:89] found id: ""
	I0120 16:36:37.786448 2184738 logs.go:282] 0 containers: []
	W0120 16:36:37.786458 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:36:37.786468 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:36:37.786485 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:36:37.801155 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:36:37.801196 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:36:37.885547 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:36:37.885581 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:36:37.885601 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 16:36:37.967253 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:36:37.967300 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:36:38.014330 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:36:38.014372 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:36:40.566763 2184738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:36:40.589001 2184738 kubeadm.go:597] duration metric: took 4m4.263685963s to restartPrimaryControlPlane
	W0120 16:36:40.589663 2184738 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 16:36:40.589710 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 16:36:41.126187 2184738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:36:41.146919 2184738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:36:41.162817 2184738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:36:41.178221 2184738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:36:41.178247 2184738 kubeadm.go:157] found existing configuration files:
	
	I0120 16:36:41.178307 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:36:41.195422 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:36:41.195495 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:36:41.209938 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:36:41.222849 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:36:41.222933 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:36:41.237502 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:36:41.251895 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:36:41.251991 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:36:41.267106 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:36:41.281169 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:36:41.281253 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:36:41.297669 2184738 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:36:41.573167 2184738 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:38:38.296776 2184738 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 16:38:38.296887 2184738 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 16:38:38.298357 2184738 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 16:38:38.298416 2184738 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:38:38.298486 2184738 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:38:38.298589 2184738 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:38:38.298723 2184738 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 16:38:38.298778 2184738 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:38:38.300538 2184738 out.go:235]   - Generating certificates and keys ...
	I0120 16:38:38.300647 2184738 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:38:38.300738 2184738 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:38:38.300879 2184738 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 16:38:38.301008 2184738 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 16:38:38.301128 2184738 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 16:38:38.301247 2184738 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 16:38:38.301315 2184738 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 16:38:38.301365 2184738 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 16:38:38.301429 2184738 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 16:38:38.301494 2184738 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 16:38:38.301531 2184738 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 16:38:38.301580 2184738 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:38:38.301621 2184738 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:38:38.301664 2184738 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:38:38.301757 2184738 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:38:38.301810 2184738 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:38:38.301901 2184738 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:38:38.302022 2184738 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:38:38.302078 2184738 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:38:38.302197 2184738 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:38:38.303831 2184738 out.go:235]   - Booting up control plane ...
	I0120 16:38:38.303927 2184738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:38:38.304020 2184738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:38:38.304091 2184738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:38:38.304189 2184738 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:38:38.304418 2184738 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 16:38:38.304466 2184738 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 16:38:38.304528 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:38:38.304690 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:38:38.304765 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:38:38.304977 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:38:38.305063 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:38:38.305274 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:38:38.305346 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:38:38.305525 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:38:38.305588 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:38:38.305744 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:38:38.305755 2184738 kubeadm.go:310] 
	I0120 16:38:38.305788 2184738 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 16:38:38.305831 2184738 kubeadm.go:310] 		timed out waiting for the condition
	I0120 16:38:38.305838 2184738 kubeadm.go:310] 
	I0120 16:38:38.305866 2184738 kubeadm.go:310] 	This error is likely caused by:
	I0120 16:38:38.305896 2184738 kubeadm.go:310] 		- The kubelet is not running
	I0120 16:38:38.305982 2184738 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 16:38:38.305988 2184738 kubeadm.go:310] 
	I0120 16:38:38.306073 2184738 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 16:38:38.306109 2184738 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 16:38:38.306137 2184738 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 16:38:38.306143 2184738 kubeadm.go:310] 
	I0120 16:38:38.306287 2184738 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 16:38:38.306379 2184738 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 16:38:38.306388 2184738 kubeadm.go:310] 
	I0120 16:38:38.306491 2184738 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 16:38:38.306590 2184738 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 16:38:38.306697 2184738 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 16:38:38.306796 2184738 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 16:38:38.306892 2184738 kubeadm.go:310] 
	W0120 16:38:38.306985 2184738 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
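	# ---- Editor's note (shell sketch; not part of the captured minikube output) ----
	# A minimal one-shot version of the kubelet / CRI-O checks kubeadm recommends above,
	# assuming the CRI-O socket path /var/run/crio/crio.sock shown in this log.
	systemctl status kubelet --no-pager || true
	journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Once a failing container id is identified, inspect its logs:
	# sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>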
	
	I0120 16:38:38.307036 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 16:38:39.662195 2184738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.355131678s)
	I0120 16:38:39.662266 2184738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:38:39.679890 2184738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:38:39.691805 2184738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:38:39.691831 2184738 kubeadm.go:157] found existing configuration files:
	
	I0120 16:38:39.691882 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:38:39.701875 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:38:39.701947 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:38:39.713011 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:38:39.722994 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:38:39.723096 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:38:39.733355 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:38:39.743435 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:38:39.743531 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:38:39.755445 2184738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:38:39.766299 2184738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:38:39.766392 2184738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:38:39.777752 2184738 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:38:39.859175 2184738 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 16:38:39.859259 2184738 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:38:40.023655 2184738 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:38:40.023836 2184738 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:38:40.023966 2184738 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 16:38:40.239283 2184738 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:38:40.241280 2184738 out.go:235]   - Generating certificates and keys ...
	I0120 16:38:40.241391 2184738 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:38:40.241492 2184738 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:38:40.241592 2184738 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 16:38:40.241713 2184738 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 16:38:40.241827 2184738 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 16:38:40.241909 2184738 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 16:38:40.242052 2184738 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 16:38:40.242325 2184738 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 16:38:40.242805 2184738 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 16:38:40.243386 2184738 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 16:38:40.243563 2184738 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 16:38:40.243662 2184738 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:38:40.622683 2184738 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:38:40.871731 2184738 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:38:41.099217 2184738 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:38:41.395955 2184738 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:38:41.417244 2184738 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:38:41.418595 2184738 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:38:41.418698 2184738 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:38:41.573674 2184738 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:38:41.575492 2184738 out.go:235]   - Booting up control plane ...
	I0120 16:38:41.575612 2184738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:38:41.581234 2184738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:38:41.590806 2184738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:38:41.593354 2184738 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:38:41.597452 2184738 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 16:39:21.600565 2184738 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 16:39:21.601116 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:39:21.601383 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:39:26.602213 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:39:26.602458 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:39:36.603742 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:39:36.604031 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:39:56.604260 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:39:56.604524 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:40:36.602699 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:40:36.602988 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:40:36.603021 2184738 kubeadm.go:310] 
	I0120 16:40:36.603067 2184738 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 16:40:36.603116 2184738 kubeadm.go:310] 		timed out waiting for the condition
	I0120 16:40:36.603126 2184738 kubeadm.go:310] 
	I0120 16:40:36.603199 2184738 kubeadm.go:310] 	This error is likely caused by:
	I0120 16:40:36.603298 2184738 kubeadm.go:310] 		- The kubelet is not running
	I0120 16:40:36.603447 2184738 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 16:40:36.603461 2184738 kubeadm.go:310] 
	I0120 16:40:36.603602 2184738 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 16:40:36.603656 2184738 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 16:40:36.603702 2184738 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 16:40:36.603712 2184738 kubeadm.go:310] 
	I0120 16:40:36.603856 2184738 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 16:40:36.604014 2184738 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 16:40:36.604043 2184738 kubeadm.go:310] 
	I0120 16:40:36.604191 2184738 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 16:40:36.604269 2184738 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 16:40:36.604344 2184738 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 16:40:36.604404 2184738 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 16:40:36.604425 2184738 kubeadm.go:310] 
	I0120 16:40:36.605287 2184738 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:40:36.605364 2184738 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 16:40:36.605439 2184738 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
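	# ---- Editor's note (shell sketch; not part of the captured minikube output) ----
	# Reproduces the health probe behind the repeated "connection refused" lines above;
	# 10248 is the kubelet healthz port that kubeadm polls, as shown in this log.
	for i in $(seq 1 5); do
	    curl -sSL --max-time 2 http://localhost:10248/healthz && break
	    sleep 5
	done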
	I0120 16:40:36.605487 2184738 kubeadm.go:394] duration metric: took 8m0.334764639s to StartCluster
	I0120 16:40:36.605537 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:40:36.605595 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:40:36.664659 2184738 cri.go:89] found id: ""
	I0120 16:40:36.664699 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.664713 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:40:36.664726 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:40:36.664807 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:40:36.711951 2184738 cri.go:89] found id: ""
	I0120 16:40:36.711989 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.712001 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:40:36.712009 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:40:36.712081 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:40:36.755155 2184738 cri.go:89] found id: ""
	I0120 16:40:36.755191 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.755202 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:40:36.755211 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:40:36.755299 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:40:36.802516 2184738 cri.go:89] found id: ""
	I0120 16:40:36.802549 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.802569 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:40:36.802577 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:40:36.802671 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:40:36.847206 2184738 cri.go:89] found id: ""
	I0120 16:40:36.847247 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.847259 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:40:36.847267 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:40:36.847352 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:40:36.886583 2184738 cri.go:89] found id: ""
	I0120 16:40:36.886640 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.886653 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:40:36.886663 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:40:36.886735 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:40:36.931296 2184738 cri.go:89] found id: ""
	I0120 16:40:36.931331 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.931343 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:40:36.931352 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:40:36.931425 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:40:36.975039 2184738 cri.go:89] found id: ""
	I0120 16:40:36.975087 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.975101 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:40:36.975119 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:40:36.975138 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:40:37.017939 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:40:37.017981 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:40:37.069807 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:40:37.069868 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:40:37.085253 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:40:37.085300 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:40:37.189688 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:40:37.189723 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:40:37.189740 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0120 16:40:37.298875 2184738 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 16:40:37.298952 2184738 out.go:270] * 
	* 
	W0120 16:40:37.299029 2184738 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 16:40:37.299046 2184738 out.go:270] * 
	* 
	W0120 16:40:37.299964 2184738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:40:37.303768 2184738 out.go:201] 
	W0120 16:40:37.304984 2184738 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 16:40:37.305032 2184738 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 16:40:37.305051 2184738 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 16:40:37.306457 2184738 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-806597 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 2 (267.565719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-806597 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138                             | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138                             | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138                             | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138                             | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | cat /etc/docker/daemon.json                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC |                     |
	|         | docker system info                                   |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138                             | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo cat                    | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo cat                    | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138                             | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo cat                    | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138                             | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-708138 sudo                        | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	|         | crio config                                          |                           |         |         |                     |                     |
	| delete  | -p custom-flannel-708138                             | custom-flannel-708138     | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC | 20 Jan 25 16:40 UTC |
	| start   | -p enable-default-cni-708138                         | enable-default-cni-708138 | jenkins | v1.35.0 | 20 Jan 25 16:40 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:40:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:40:20.203963 2193482 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:40:20.204098 2193482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:40:20.204110 2193482 out.go:358] Setting ErrFile to fd 2...
	I0120 16:40:20.204115 2193482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:40:20.204318 2193482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:40:20.205042 2193482 out.go:352] Setting JSON to false
	I0120 16:40:20.206345 2193482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":30166,"bootTime":1737361054,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:40:20.206464 2193482 start.go:139] virtualization: kvm guest
	I0120 16:40:20.208861 2193482 out.go:177] * [enable-default-cni-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:40:20.210349 2193482 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:40:20.210367 2193482 notify.go:220] Checking for updates...
	I0120 16:40:20.213370 2193482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:40:20.214852 2193482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:40:20.216295 2193482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:40:20.217842 2193482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:40:20.219310 2193482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:40:20.221252 2193482 config.go:182] Loaded profile config "calico-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:40:20.221391 2193482 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:40:20.221531 2193482 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:40:20.221668 2193482 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:40:20.265333 2193482 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:40:20.266646 2193482 start.go:297] selected driver: kvm2
	I0120 16:40:20.266665 2193482 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:40:20.266683 2193482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:40:20.267782 2193482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:40:20.267894 2193482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:40:20.286227 2193482 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:40:20.286304 2193482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0120 16:40:20.286769 2193482 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0120 16:40:20.286818 2193482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:40:20.286860 2193482 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:40:20.286871 2193482 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:40:20.286962 2193482 start.go:340] cluster config:
	{Name:enable-default-cni-708138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:enable-default-cni-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:40:20.287164 2193482 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:40:20.290261 2193482 out.go:177] * Starting "enable-default-cni-708138" primary control-plane node in "enable-default-cni-708138" cluster
	I0120 16:40:20.291729 2193482 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:40:20.291787 2193482 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:40:20.291805 2193482 cache.go:56] Caching tarball of preloaded images
	I0120 16:40:20.291943 2193482 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:40:20.291955 2193482 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:40:20.292109 2193482 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/config.json ...
	I0120 16:40:20.292137 2193482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/config.json: {Name:mk2b9188c652cd83e139371cb83c55522f7b628d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:40:20.292320 2193482 start.go:360] acquireMachinesLock for enable-default-cni-708138: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:40:20.292360 2193482 start.go:364] duration metric: took 20.732µs to acquireMachinesLock for "enable-default-cni-708138"
	I0120 16:40:20.292384 2193482 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:enable-default-cni-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:40:20.292478 2193482 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:40:16.550941 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:18.551342 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:20.551813 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:20.294382 2193482 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 16:40:20.294629 2193482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:40:20.294693 2193482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:40:20.310844 2193482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I0120 16:40:20.311411 2193482 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:40:20.312129 2193482 main.go:141] libmachine: Using API Version  1
	I0120 16:40:20.312155 2193482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:40:20.312574 2193482 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:40:20.312864 2193482 main.go:141] libmachine: (enable-default-cni-708138) Calling .GetMachineName
	I0120 16:40:20.313112 2193482 main.go:141] libmachine: (enable-default-cni-708138) Calling .DriverName
	I0120 16:40:20.313273 2193482 start.go:159] libmachine.API.Create for "enable-default-cni-708138" (driver="kvm2")
	I0120 16:40:20.313312 2193482 client.go:168] LocalClient.Create starting
	I0120 16:40:20.313365 2193482 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:40:20.313417 2193482 main.go:141] libmachine: Decoding PEM data...
	I0120 16:40:20.313442 2193482 main.go:141] libmachine: Parsing certificate...
	I0120 16:40:20.313517 2193482 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:40:20.313546 2193482 main.go:141] libmachine: Decoding PEM data...
	I0120 16:40:20.313569 2193482 main.go:141] libmachine: Parsing certificate...
	I0120 16:40:20.313598 2193482 main.go:141] libmachine: Running pre-create checks...
	I0120 16:40:20.313619 2193482 main.go:141] libmachine: (enable-default-cni-708138) Calling .PreCreateCheck
	I0120 16:40:20.313991 2193482 main.go:141] libmachine: (enable-default-cni-708138) Calling .GetConfigRaw
	I0120 16:40:20.314478 2193482 main.go:141] libmachine: Creating machine...
	I0120 16:40:20.314494 2193482 main.go:141] libmachine: (enable-default-cni-708138) Calling .Create
	I0120 16:40:20.314746 2193482 main.go:141] libmachine: (enable-default-cni-708138) creating KVM machine...
	I0120 16:40:20.314765 2193482 main.go:141] libmachine: (enable-default-cni-708138) creating network...
	I0120 16:40:20.316275 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | found existing default KVM network
	I0120 16:40:20.317995 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:20.317782 2193505 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201800}
	I0120 16:40:20.318020 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | created network xml: 
	I0120 16:40:20.318031 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | <network>
	I0120 16:40:20.318046 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |   <name>mk-enable-default-cni-708138</name>
	I0120 16:40:20.318061 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |   <dns enable='no'/>
	I0120 16:40:20.318070 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |   
	I0120 16:40:20.318087 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0120 16:40:20.318096 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |     <dhcp>
	I0120 16:40:20.318109 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0120 16:40:20.318115 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |     </dhcp>
	I0120 16:40:20.318121 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |   </ip>
	I0120 16:40:20.318133 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG |   
	I0120 16:40:20.318142 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | </network>
	I0120 16:40:20.318155 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | 
	I0120 16:40:20.324120 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | trying to create private KVM network mk-enable-default-cni-708138 192.168.39.0/24...
	I0120 16:40:20.411952 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | private KVM network mk-enable-default-cni-708138 192.168.39.0/24 created
	I0120 16:40:20.412011 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:20.411875 2193505 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:40:20.412083 2193482 main.go:141] libmachine: (enable-default-cni-708138) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/enable-default-cni-708138 ...
	I0120 16:40:20.412112 2193482 main.go:141] libmachine: (enable-default-cni-708138) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:40:20.412236 2193482 main.go:141] libmachine: (enable-default-cni-708138) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:40:20.720064 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:20.719895 2193505 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/enable-default-cni-708138/id_rsa...
	I0120 16:40:20.883169 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:20.883001 2193505 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/enable-default-cni-708138/enable-default-cni-708138.rawdisk...
	I0120 16:40:20.883200 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | Writing magic tar header
	I0120 16:40:20.883210 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | Writing SSH key tar header
	I0120 16:40:20.883218 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:20.883138 2193505 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/enable-default-cni-708138 ...
	I0120 16:40:20.883324 2193482 main.go:141] libmachine: (enable-default-cni-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/enable-default-cni-708138 (perms=drwx------)
	I0120 16:40:20.883343 2193482 main.go:141] libmachine: (enable-default-cni-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:40:20.883351 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/enable-default-cni-708138
	I0120 16:40:20.883359 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:40:20.883366 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:40:20.883374 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:40:20.883380 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:40:20.883403 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | checking permissions on dir: /home/jenkins
	I0120 16:40:20.883416 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | checking permissions on dir: /home
	I0120 16:40:20.883428 2193482 main.go:141] libmachine: (enable-default-cni-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:40:20.883437 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | skipping /home - not owner
	I0120 16:40:20.883448 2193482 main.go:141] libmachine: (enable-default-cni-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:40:20.883468 2193482 main.go:141] libmachine: (enable-default-cni-708138) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:40:20.883476 2193482 main.go:141] libmachine: (enable-default-cni-708138) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:40:20.883493 2193482 main.go:141] libmachine: (enable-default-cni-708138) creating domain...
	I0120 16:40:20.884775 2193482 main.go:141] libmachine: (enable-default-cni-708138) define libvirt domain using xml: 
	I0120 16:40:20.884800 2193482 main.go:141] libmachine: (enable-default-cni-708138) <domain type='kvm'>
	I0120 16:40:20.884835 2193482 main.go:141] libmachine: (enable-default-cni-708138)   <name>enable-default-cni-708138</name>
	I0120 16:40:20.884868 2193482 main.go:141] libmachine: (enable-default-cni-708138)   <memory unit='MiB'>3072</memory>
	I0120 16:40:20.884875 2193482 main.go:141] libmachine: (enable-default-cni-708138)   <vcpu>2</vcpu>
	I0120 16:40:20.884884 2193482 main.go:141] libmachine: (enable-default-cni-708138)   <features>
	I0120 16:40:20.884890 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <acpi/>
	I0120 16:40:20.884895 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <apic/>
	I0120 16:40:20.884901 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <pae/>
	I0120 16:40:20.884906 2193482 main.go:141] libmachine: (enable-default-cni-708138)     
	I0120 16:40:20.884911 2193482 main.go:141] libmachine: (enable-default-cni-708138)   </features>
	I0120 16:40:20.884916 2193482 main.go:141] libmachine: (enable-default-cni-708138)   <cpu mode='host-passthrough'>
	I0120 16:40:20.884922 2193482 main.go:141] libmachine: (enable-default-cni-708138)   
	I0120 16:40:20.884928 2193482 main.go:141] libmachine: (enable-default-cni-708138)   </cpu>
	I0120 16:40:20.884933 2193482 main.go:141] libmachine: (enable-default-cni-708138)   <os>
	I0120 16:40:20.884940 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <type>hvm</type>
	I0120 16:40:20.884945 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <boot dev='cdrom'/>
	I0120 16:40:20.884958 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <boot dev='hd'/>
	I0120 16:40:20.884989 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <bootmenu enable='no'/>
	I0120 16:40:20.885015 2193482 main.go:141] libmachine: (enable-default-cni-708138)   </os>
	I0120 16:40:20.885024 2193482 main.go:141] libmachine: (enable-default-cni-708138)   <devices>
	I0120 16:40:20.885036 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <disk type='file' device='cdrom'>
	I0120 16:40:20.885053 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/enable-default-cni-708138/boot2docker.iso'/>
	I0120 16:40:20.885066 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <target dev='hdc' bus='scsi'/>
	I0120 16:40:20.885079 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <readonly/>
	I0120 16:40:20.885087 2193482 main.go:141] libmachine: (enable-default-cni-708138)     </disk>
	I0120 16:40:20.885100 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <disk type='file' device='disk'>
	I0120 16:40:20.885113 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:40:20.885140 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/enable-default-cni-708138/enable-default-cni-708138.rawdisk'/>
	I0120 16:40:20.885155 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <target dev='hda' bus='virtio'/>
	I0120 16:40:20.885165 2193482 main.go:141] libmachine: (enable-default-cni-708138)     </disk>
	I0120 16:40:20.885175 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <interface type='network'>
	I0120 16:40:20.885188 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <source network='mk-enable-default-cni-708138'/>
	I0120 16:40:20.885199 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <model type='virtio'/>
	I0120 16:40:20.885211 2193482 main.go:141] libmachine: (enable-default-cni-708138)     </interface>
	I0120 16:40:20.885223 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <interface type='network'>
	I0120 16:40:20.885236 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <source network='default'/>
	I0120 16:40:20.885243 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <model type='virtio'/>
	I0120 16:40:20.885253 2193482 main.go:141] libmachine: (enable-default-cni-708138)     </interface>
	I0120 16:40:20.885263 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <serial type='pty'>
	I0120 16:40:20.885286 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <target port='0'/>
	I0120 16:40:20.885310 2193482 main.go:141] libmachine: (enable-default-cni-708138)     </serial>
	I0120 16:40:20.885333 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <console type='pty'>
	I0120 16:40:20.885342 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <target type='serial' port='0'/>
	I0120 16:40:20.885351 2193482 main.go:141] libmachine: (enable-default-cni-708138)     </console>
	I0120 16:40:20.885358 2193482 main.go:141] libmachine: (enable-default-cni-708138)     <rng model='virtio'>
	I0120 16:40:20.885368 2193482 main.go:141] libmachine: (enable-default-cni-708138)       <backend model='random'>/dev/random</backend>
	I0120 16:40:20.885380 2193482 main.go:141] libmachine: (enable-default-cni-708138)     </rng>
	I0120 16:40:20.885388 2193482 main.go:141] libmachine: (enable-default-cni-708138)     
	I0120 16:40:20.885395 2193482 main.go:141] libmachine: (enable-default-cni-708138)     
	I0120 16:40:20.885403 2193482 main.go:141] libmachine: (enable-default-cni-708138)   </devices>
	I0120 16:40:20.885412 2193482 main.go:141] libmachine: (enable-default-cni-708138) </domain>
	I0120 16:40:20.885423 2193482 main.go:141] libmachine: (enable-default-cni-708138) 
	I0120 16:40:20.890134 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:d6:ac:ed in network default
	I0120 16:40:20.890894 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:20.890916 2193482 main.go:141] libmachine: (enable-default-cni-708138) starting domain...
	I0120 16:40:20.890930 2193482 main.go:141] libmachine: (enable-default-cni-708138) ensuring networks are active...
	I0120 16:40:20.891736 2193482 main.go:141] libmachine: (enable-default-cni-708138) Ensuring network default is active
	I0120 16:40:20.892136 2193482 main.go:141] libmachine: (enable-default-cni-708138) Ensuring network mk-enable-default-cni-708138 is active
	I0120 16:40:20.892713 2193482 main.go:141] libmachine: (enable-default-cni-708138) getting domain XML...
	I0120 16:40:20.893436 2193482 main.go:141] libmachine: (enable-default-cni-708138) creating domain...
	I0120 16:40:22.223434 2193482 main.go:141] libmachine: (enable-default-cni-708138) waiting for IP...
	I0120 16:40:22.224334 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:22.224859 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:22.224921 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:22.224839 2193505 retry.go:31] will retry after 216.903711ms: waiting for domain to come up
	I0120 16:40:22.443485 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:22.444292 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:22.444327 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:22.444234 2193505 retry.go:31] will retry after 269.78701ms: waiting for domain to come up
	I0120 16:40:22.715913 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:22.716429 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:22.716495 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:22.716397 2193505 retry.go:31] will retry after 326.617927ms: waiting for domain to come up
	I0120 16:40:23.045051 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:23.045610 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:23.045642 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:23.045582 2193505 retry.go:31] will retry after 452.598811ms: waiting for domain to come up
	I0120 16:40:23.500490 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:23.500965 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:23.501005 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:23.500955 2193505 retry.go:31] will retry after 739.729299ms: waiting for domain to come up
	I0120 16:40:24.241886 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:24.242384 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:24.242446 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:24.242343 2193505 retry.go:31] will retry after 815.401822ms: waiting for domain to come up
	I0120 16:40:25.059433 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:25.060059 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:25.060086 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:25.060034 2193505 retry.go:31] will retry after 851.570536ms: waiting for domain to come up
	I0120 16:40:23.051265 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:25.051530 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:25.912950 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:25.913529 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:25.913561 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:25.913516 2193505 retry.go:31] will retry after 924.039481ms: waiting for domain to come up
	I0120 16:40:26.839825 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:26.840312 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:26.840344 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:26.840253 2193505 retry.go:31] will retry after 1.470081492s: waiting for domain to come up
	I0120 16:40:28.312052 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:28.312635 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:28.312709 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:28.312633 2193505 retry.go:31] will retry after 1.763559322s: waiting for domain to come up
	I0120 16:40:30.078897 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:30.079463 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:30.079508 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:30.079408 2193505 retry.go:31] will retry after 2.126302629s: waiting for domain to come up
	I0120 16:40:27.053067 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:29.552245 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:32.207990 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | domain enable-default-cni-708138 has defined MAC address 52:54:00:7c:70:56 in network mk-enable-default-cni-708138
	I0120 16:40:32.208533 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | unable to find current IP address of domain enable-default-cni-708138 in network mk-enable-default-cni-708138
	I0120 16:40:32.208571 2193482 main.go:141] libmachine: (enable-default-cni-708138) DBG | I0120 16:40:32.208529 2193505 retry.go:31] will retry after 3.401963388s: waiting for domain to come up
	I0120 16:40:31.552322 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:34.050721 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:36.602699 2184738 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 16:40:36.602988 2184738 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 16:40:36.603021 2184738 kubeadm.go:310] 
	I0120 16:40:36.603067 2184738 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 16:40:36.603116 2184738 kubeadm.go:310] 		timed out waiting for the condition
	I0120 16:40:36.603126 2184738 kubeadm.go:310] 
	I0120 16:40:36.603199 2184738 kubeadm.go:310] 	This error is likely caused by:
	I0120 16:40:36.603298 2184738 kubeadm.go:310] 		- The kubelet is not running
	I0120 16:40:36.603447 2184738 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 16:40:36.603461 2184738 kubeadm.go:310] 
	I0120 16:40:36.603602 2184738 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 16:40:36.603656 2184738 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 16:40:36.603702 2184738 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 16:40:36.603712 2184738 kubeadm.go:310] 
	I0120 16:40:36.603856 2184738 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 16:40:36.604014 2184738 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 16:40:36.604043 2184738 kubeadm.go:310] 
	I0120 16:40:36.604191 2184738 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 16:40:36.604269 2184738 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 16:40:36.604344 2184738 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 16:40:36.604404 2184738 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 16:40:36.604425 2184738 kubeadm.go:310] 
	I0120 16:40:36.605287 2184738 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:40:36.605364 2184738 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 16:40:36.605439 2184738 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 16:40:36.605487 2184738 kubeadm.go:394] duration metric: took 8m0.334764639s to StartCluster
	I0120 16:40:36.605537 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 16:40:36.605595 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 16:40:36.664659 2184738 cri.go:89] found id: ""
	I0120 16:40:36.664699 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.664713 2184738 logs.go:284] No container was found matching "kube-apiserver"
	I0120 16:40:36.664726 2184738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 16:40:36.664807 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 16:40:36.711951 2184738 cri.go:89] found id: ""
	I0120 16:40:36.711989 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.712001 2184738 logs.go:284] No container was found matching "etcd"
	I0120 16:40:36.712009 2184738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 16:40:36.712081 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 16:40:36.755155 2184738 cri.go:89] found id: ""
	I0120 16:40:36.755191 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.755202 2184738 logs.go:284] No container was found matching "coredns"
	I0120 16:40:36.755211 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 16:40:36.755299 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 16:40:36.802516 2184738 cri.go:89] found id: ""
	I0120 16:40:36.802549 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.802569 2184738 logs.go:284] No container was found matching "kube-scheduler"
	I0120 16:40:36.802577 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 16:40:36.802671 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 16:40:36.847206 2184738 cri.go:89] found id: ""
	I0120 16:40:36.847247 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.847259 2184738 logs.go:284] No container was found matching "kube-proxy"
	I0120 16:40:36.847267 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 16:40:36.847352 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 16:40:36.886583 2184738 cri.go:89] found id: ""
	I0120 16:40:36.886640 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.886653 2184738 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 16:40:36.886663 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 16:40:36.886735 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 16:40:36.931296 2184738 cri.go:89] found id: ""
	I0120 16:40:36.931331 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.931343 2184738 logs.go:284] No container was found matching "kindnet"
	I0120 16:40:36.931352 2184738 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 16:40:36.931425 2184738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 16:40:36.975039 2184738 cri.go:89] found id: ""
	I0120 16:40:36.975087 2184738 logs.go:282] 0 containers: []
	W0120 16:40:36.975101 2184738 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 16:40:36.975119 2184738 logs.go:123] Gathering logs for container status ...
	I0120 16:40:36.975138 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 16:40:37.017939 2184738 logs.go:123] Gathering logs for kubelet ...
	I0120 16:40:37.017981 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 16:40:37.069807 2184738 logs.go:123] Gathering logs for dmesg ...
	I0120 16:40:37.069868 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 16:40:37.085253 2184738 logs.go:123] Gathering logs for describe nodes ...
	I0120 16:40:37.085300 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 16:40:37.189688 2184738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 16:40:37.189723 2184738 logs.go:123] Gathering logs for CRI-O ...
	I0120 16:40:37.189740 2184738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0120 16:40:37.298875 2184738 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 16:40:37.298952 2184738 out.go:270] * 
	W0120 16:40:37.299029 2184738 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 16:40:37.299046 2184738 out.go:270] * 
	W0120 16:40:37.299964 2184738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:40:37.303768 2184738 out.go:201] 
	W0120 16:40:37.304984 2184738 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 16:40:37.305032 2184738 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 16:40:37.305051 2184738 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 16:40:37.306457 2184738 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.301508932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737391238301489145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd60b0b7-b52d-4c7f-a7f2-eb5df03f84d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.302291951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6fd64dd-5862-47fc-a040-fb74b6e71d4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.302345004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6fd64dd-5862-47fc-a040-fb74b6e71d4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.302532157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d6fd64dd-5862-47fc-a040-fb74b6e71d4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.336859731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cad9f1c-81ae-4d53-8741-8a01ae49e336 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.336938666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cad9f1c-81ae-4d53-8741-8a01ae49e336 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.338152325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6200e275-bb3b-47e0-a1fb-0304338d2d23 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.338723731Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737391238338693469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6200e275-bb3b-47e0-a1fb-0304338d2d23 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.339390392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1640e7d9-51ac-4bff-a63a-4b3dd00c520b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.339443780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1640e7d9-51ac-4bff-a63a-4b3dd00c520b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.339478978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1640e7d9-51ac-4bff-a63a-4b3dd00c520b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.375523982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e808b89-309b-4b58-aa3a-1bc36e17c449 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.375615207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e808b89-309b-4b58-aa3a-1bc36e17c449 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.378997888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96baccff-9798-4e63-a42a-2140b14c8bee name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.379378601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737391238379358799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96baccff-9798-4e63-a42a-2140b14c8bee name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.380170013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e939f48-1bd9-45e5-803d-84f31062c8eb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.380272508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e939f48-1bd9-45e5-803d-84f31062c8eb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.380331712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8e939f48-1bd9-45e5-803d-84f31062c8eb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.415287230Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c6402aa-298c-4680-8e3b-e8bb85ea1f14 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.415376307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c6402aa-298c-4680-8e3b-e8bb85ea1f14 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.416922973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=779526aa-f6bc-46ab-87b1-92f0eabbc7fc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.417296215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737391238417266310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=779526aa-f6bc-46ab-87b1-92f0eabbc7fc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.417935273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=255e3e42-999c-4d30-9a97-6d06245e842e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.418009542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=255e3e42-999c-4d30-9a97-6d06245e842e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:40:38 old-k8s-version-806597 crio[634]: time="2025-01-20 16:40:38.418136052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=255e3e42-999c-4d30-9a97-6d06245e842e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 16:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055270] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137706] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.025165] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.690534] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.917675] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.063794] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072347] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.228149] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.140637] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.243673] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.859627] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.059674] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.198086] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[ +11.593379] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 16:36] systemd-fstab-generator[5007]: Ignoring "noauto" option for root device
	[Jan20 16:38] systemd-fstab-generator[5282]: Ignoring "noauto" option for root device
	[  +0.065823] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 16:40:38 up 8 min,  0 users,  load average: 0.01, 0.10, 0.07
	Linux old-k8s-version-806597 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]: goroutine 155 [runnable]:
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008728c0)
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]: goroutine 156 [select]:
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000c53180, 0xc000cfed01, 0xc000bfca80, 0xc000c4b120, 0xc000c56a80, 0xc000c56a40)
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000cfede0, 0x0, 0x0)
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008728c0)
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5459]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 20 16:40:37 old-k8s-version-806597 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 20 16:40:37 old-k8s-version-806597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 20 16:40:37 old-k8s-version-806597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 20 16:40:37 old-k8s-version-806597 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 16:40:37 old-k8s-version-806597 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5529]: I0120 16:40:37.907997    5529 server.go:416] Version: v1.20.0
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5529]: I0120 16:40:37.908348    5529 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5529]: I0120 16:40:37.910494    5529 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5529]: I0120 16:40:37.911596    5529 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jan 20 16:40:37 old-k8s-version-806597 kubelet[5529]: W0120 16:40:37.911617    5529 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 2 (234.980627ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-806597" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (512.72s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (307.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0120 16:37:14.249728 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:50.815390 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:50.821871 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:50.833375 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:50.854847 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:50.896281 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:50.977805 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:51.139369 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:51.461124 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:52.103516 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:53.385234 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:37:55.946753 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:01.068449 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:04.173287 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:04.179796 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:04.191227 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:04.212923 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:04.254684 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:04.336325 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:04.497632 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:04.819101 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:05.460497 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:06.742269 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: exit status 80 (5m7.920768957s)

                                                
                                                
-- stdout --
	* [calico-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "calico-708138" primary control-plane node in "calico-708138" cluster
	* Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:37:10.689687 2189420 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:37:10.689840 2189420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:37:10.689865 2189420 out.go:358] Setting ErrFile to fd 2...
	I0120 16:37:10.689873 2189420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:37:10.690180 2189420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:37:10.690976 2189420 out.go:352] Setting JSON to false
	I0120 16:37:10.692534 2189420 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29977,"bootTime":1737361054,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:37:10.692626 2189420 start.go:139] virtualization: kvm guest
	I0120 16:37:10.694665 2189420 out.go:177] * [calico-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:37:10.696098 2189420 notify.go:220] Checking for updates...
	I0120 16:37:10.696202 2189420 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:37:10.697328 2189420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:37:10.698465 2189420 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:37:10.699590 2189420 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:37:10.700725 2189420 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:37:10.702137 2189420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:37:10.704161 2189420 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:37:10.704268 2189420 config.go:182] Loaded profile config "kindnet-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:37:10.704369 2189420 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:37:10.704455 2189420 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:37:10.743602 2189420 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:37:10.744888 2189420 start.go:297] selected driver: kvm2
	I0120 16:37:10.744907 2189420 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:37:10.744934 2189420 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:37:10.745983 2189420 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:37:10.746088 2189420 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:37:10.762875 2189420 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:37:10.762958 2189420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:37:10.763276 2189420 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:37:10.763321 2189420 cni.go:84] Creating CNI manager for "calico"
	I0120 16:37:10.763329 2189420 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0120 16:37:10.763412 2189420 start.go:340] cluster config:
	{Name:calico-708138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:37:10.763563 2189420 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:37:10.765404 2189420 out.go:177] * Starting "calico-708138" primary control-plane node in "calico-708138" cluster
	I0120 16:37:10.766674 2189420 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:37:10.766727 2189420 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:37:10.766742 2189420 cache.go:56] Caching tarball of preloaded images
	I0120 16:37:10.766896 2189420 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:37:10.766911 2189420 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:37:10.767031 2189420 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/config.json ...
	I0120 16:37:10.767055 2189420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/config.json: {Name:mk3abc6f8aff4ca4a6900864660d539cf91e10a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:37:10.767214 2189420 start.go:360] acquireMachinesLock for calico-708138: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:37:30.367872 2189420 start.go:364] duration metric: took 19.600627874s to acquireMachinesLock for "calico-708138"
	I0120 16:37:30.367967 2189420 start.go:93] Provisioning new machine with config: &{Name:calico-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:37:30.368116 2189420 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:37:30.370549 2189420 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 16:37:30.370785 2189420 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:37:30.370847 2189420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:37:30.388819 2189420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0120 16:37:30.389461 2189420 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:37:30.390134 2189420 main.go:141] libmachine: Using API Version  1
	I0120 16:37:30.390158 2189420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:37:30.390638 2189420 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:37:30.390846 2189420 main.go:141] libmachine: (calico-708138) Calling .GetMachineName
	I0120 16:37:30.391011 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:37:30.391195 2189420 start.go:159] libmachine.API.Create for "calico-708138" (driver="kvm2")
	I0120 16:37:30.391233 2189420 client.go:168] LocalClient.Create starting
	I0120 16:37:30.391275 2189420 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:37:30.391328 2189420 main.go:141] libmachine: Decoding PEM data...
	I0120 16:37:30.391349 2189420 main.go:141] libmachine: Parsing certificate...
	I0120 16:37:30.391425 2189420 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:37:30.391451 2189420 main.go:141] libmachine: Decoding PEM data...
	I0120 16:37:30.391466 2189420 main.go:141] libmachine: Parsing certificate...
	I0120 16:37:30.391492 2189420 main.go:141] libmachine: Running pre-create checks...
	I0120 16:37:30.391516 2189420 main.go:141] libmachine: (calico-708138) Calling .PreCreateCheck
	I0120 16:37:30.391954 2189420 main.go:141] libmachine: (calico-708138) Calling .GetConfigRaw
	I0120 16:37:30.392514 2189420 main.go:141] libmachine: Creating machine...
	I0120 16:37:30.392532 2189420 main.go:141] libmachine: (calico-708138) Calling .Create
	I0120 16:37:30.392674 2189420 main.go:141] libmachine: (calico-708138) creating KVM machine...
	I0120 16:37:30.392694 2189420 main.go:141] libmachine: (calico-708138) creating network...
	I0120 16:37:30.394118 2189420 main.go:141] libmachine: (calico-708138) DBG | found existing default KVM network
	I0120 16:37:30.395783 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:30.395607 2189579 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6b:0b:58} reservation:<nil>}
	I0120 16:37:30.396878 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:30.396778 2189579 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:0e:01} reservation:<nil>}
	I0120 16:37:30.397712 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:30.397645 2189579 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:2c:a8} reservation:<nil>}
	I0120 16:37:30.398918 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:30.398825 2189579 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00038f170}
	I0120 16:37:30.398965 2189420 main.go:141] libmachine: (calico-708138) DBG | created network xml: 
	I0120 16:37:30.398978 2189420 main.go:141] libmachine: (calico-708138) DBG | <network>
	I0120 16:37:30.398990 2189420 main.go:141] libmachine: (calico-708138) DBG |   <name>mk-calico-708138</name>
	I0120 16:37:30.399004 2189420 main.go:141] libmachine: (calico-708138) DBG |   <dns enable='no'/>
	I0120 16:37:30.399010 2189420 main.go:141] libmachine: (calico-708138) DBG |   
	I0120 16:37:30.399022 2189420 main.go:141] libmachine: (calico-708138) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0120 16:37:30.399033 2189420 main.go:141] libmachine: (calico-708138) DBG |     <dhcp>
	I0120 16:37:30.399043 2189420 main.go:141] libmachine: (calico-708138) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0120 16:37:30.399051 2189420 main.go:141] libmachine: (calico-708138) DBG |     </dhcp>
	I0120 16:37:30.399056 2189420 main.go:141] libmachine: (calico-708138) DBG |   </ip>
	I0120 16:37:30.399061 2189420 main.go:141] libmachine: (calico-708138) DBG |   
	I0120 16:37:30.399067 2189420 main.go:141] libmachine: (calico-708138) DBG | </network>
	I0120 16:37:30.399074 2189420 main.go:141] libmachine: (calico-708138) DBG | 
	I0120 16:37:30.405453 2189420 main.go:141] libmachine: (calico-708138) DBG | trying to create private KVM network mk-calico-708138 192.168.72.0/24...
	I0120 16:37:30.483248 2189420 main.go:141] libmachine: (calico-708138) DBG | private KVM network mk-calico-708138 192.168.72.0/24 created
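
The lines above show how the driver picks an address range: it walks the private /24 candidates, skips 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 because existing libvirt networks already hold them, settles on 192.168.72.0/24, and then defines the mk-calico-708138 network from the XML it just logged. A minimal sketch of that overlap check, using only Go's net package (the CIDRs are copied from this log rather than queried from libvirt):

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDRs share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	// Subnets already claimed by other libvirt networks (from the log above).
	taken := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
	// Candidate private /24s, probed in order.
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}

	var used []*net.IPNet
	for _, s := range taken {
		_, n, _ := net.ParseCIDR(s)
		used = append(used, n)
	}

	for _, c := range candidates {
		_, n, _ := net.ParseCIDR(c)
		free := true
		for _, u := range used {
			if overlaps(n, u) {
				free = false
				break
			}
		}
		if free {
			fmt.Println("using free private subnet", n) // prints 192.168.72.0/24
			return
		}
	}
}
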
	I0120 16:37:30.483355 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:30.483156 2189579 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:37:30.483384 2189420 main.go:141] libmachine: (calico-708138) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138 ...
	I0120 16:37:30.483397 2189420 main.go:141] libmachine: (calico-708138) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:37:30.483427 2189420 main.go:141] libmachine: (calico-708138) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:37:30.783188 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:30.783024 2189579 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa...
	I0120 16:37:31.043299 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:31.043114 2189579 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/calico-708138.rawdisk...
	I0120 16:37:31.043341 2189420 main.go:141] libmachine: (calico-708138) DBG | Writing magic tar header
	I0120 16:37:31.043359 2189420 main.go:141] libmachine: (calico-708138) DBG | Writing SSH key tar header
	I0120 16:37:31.043371 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:31.043302 2189579 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138 ...
	I0120 16:37:31.043466 2189420 main.go:141] libmachine: (calico-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138
	I0120 16:37:31.043495 2189420 main.go:141] libmachine: (calico-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138 (perms=drwx------)
	I0120 16:37:31.043507 2189420 main.go:141] libmachine: (calico-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:37:31.043523 2189420 main.go:141] libmachine: (calico-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:37:31.043540 2189420 main.go:141] libmachine: (calico-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:37:31.043550 2189420 main.go:141] libmachine: (calico-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:37:31.043561 2189420 main.go:141] libmachine: (calico-708138) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:37:31.043573 2189420 main.go:141] libmachine: (calico-708138) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:37:31.043582 2189420 main.go:141] libmachine: (calico-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:37:31.043598 2189420 main.go:141] libmachine: (calico-708138) creating domain...
	I0120 16:37:31.043609 2189420 main.go:141] libmachine: (calico-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:37:31.043619 2189420 main.go:141] libmachine: (calico-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:37:31.043629 2189420 main.go:141] libmachine: (calico-708138) DBG | checking permissions on dir: /home/jenkins
	I0120 16:37:31.043637 2189420 main.go:141] libmachine: (calico-708138) DBG | checking permissions on dir: /home
	I0120 16:37:31.043652 2189420 main.go:141] libmachine: (calico-708138) DBG | skipping /home - not owner
	I0120 16:37:31.044999 2189420 main.go:141] libmachine: (calico-708138) define libvirt domain using xml: 
	I0120 16:37:31.045032 2189420 main.go:141] libmachine: (calico-708138) <domain type='kvm'>
	I0120 16:37:31.045043 2189420 main.go:141] libmachine: (calico-708138)   <name>calico-708138</name>
	I0120 16:37:31.045050 2189420 main.go:141] libmachine: (calico-708138)   <memory unit='MiB'>3072</memory>
	I0120 16:37:31.045057 2189420 main.go:141] libmachine: (calico-708138)   <vcpu>2</vcpu>
	I0120 16:37:31.045065 2189420 main.go:141] libmachine: (calico-708138)   <features>
	I0120 16:37:31.045087 2189420 main.go:141] libmachine: (calico-708138)     <acpi/>
	I0120 16:37:31.045097 2189420 main.go:141] libmachine: (calico-708138)     <apic/>
	I0120 16:37:31.045105 2189420 main.go:141] libmachine: (calico-708138)     <pae/>
	I0120 16:37:31.045112 2189420 main.go:141] libmachine: (calico-708138)     
	I0120 16:37:31.045125 2189420 main.go:141] libmachine: (calico-708138)   </features>
	I0120 16:37:31.045137 2189420 main.go:141] libmachine: (calico-708138)   <cpu mode='host-passthrough'>
	I0120 16:37:31.045147 2189420 main.go:141] libmachine: (calico-708138)   
	I0120 16:37:31.045159 2189420 main.go:141] libmachine: (calico-708138)   </cpu>
	I0120 16:37:31.045170 2189420 main.go:141] libmachine: (calico-708138)   <os>
	I0120 16:37:31.045179 2189420 main.go:141] libmachine: (calico-708138)     <type>hvm</type>
	I0120 16:37:31.045189 2189420 main.go:141] libmachine: (calico-708138)     <boot dev='cdrom'/>
	I0120 16:37:31.045198 2189420 main.go:141] libmachine: (calico-708138)     <boot dev='hd'/>
	I0120 16:37:31.045209 2189420 main.go:141] libmachine: (calico-708138)     <bootmenu enable='no'/>
	I0120 16:37:31.045244 2189420 main.go:141] libmachine: (calico-708138)   </os>
	I0120 16:37:31.045303 2189420 main.go:141] libmachine: (calico-708138)   <devices>
	I0120 16:37:31.045323 2189420 main.go:141] libmachine: (calico-708138)     <disk type='file' device='cdrom'>
	I0120 16:37:31.045335 2189420 main.go:141] libmachine: (calico-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/boot2docker.iso'/>
	I0120 16:37:31.045347 2189420 main.go:141] libmachine: (calico-708138)       <target dev='hdc' bus='scsi'/>
	I0120 16:37:31.045356 2189420 main.go:141] libmachine: (calico-708138)       <readonly/>
	I0120 16:37:31.045364 2189420 main.go:141] libmachine: (calico-708138)     </disk>
	I0120 16:37:31.045374 2189420 main.go:141] libmachine: (calico-708138)     <disk type='file' device='disk'>
	I0120 16:37:31.045408 2189420 main.go:141] libmachine: (calico-708138)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:37:31.045441 2189420 main.go:141] libmachine: (calico-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/calico-708138.rawdisk'/>
	I0120 16:37:31.045458 2189420 main.go:141] libmachine: (calico-708138)       <target dev='hda' bus='virtio'/>
	I0120 16:37:31.045468 2189420 main.go:141] libmachine: (calico-708138)     </disk>
	I0120 16:37:31.045474 2189420 main.go:141] libmachine: (calico-708138)     <interface type='network'>
	I0120 16:37:31.045481 2189420 main.go:141] libmachine: (calico-708138)       <source network='mk-calico-708138'/>
	I0120 16:37:31.045486 2189420 main.go:141] libmachine: (calico-708138)       <model type='virtio'/>
	I0120 16:37:31.045493 2189420 main.go:141] libmachine: (calico-708138)     </interface>
	I0120 16:37:31.045498 2189420 main.go:141] libmachine: (calico-708138)     <interface type='network'>
	I0120 16:37:31.045505 2189420 main.go:141] libmachine: (calico-708138)       <source network='default'/>
	I0120 16:37:31.045510 2189420 main.go:141] libmachine: (calico-708138)       <model type='virtio'/>
	I0120 16:37:31.045520 2189420 main.go:141] libmachine: (calico-708138)     </interface>
	I0120 16:37:31.045553 2189420 main.go:141] libmachine: (calico-708138)     <serial type='pty'>
	I0120 16:37:31.045577 2189420 main.go:141] libmachine: (calico-708138)       <target port='0'/>
	I0120 16:37:31.045598 2189420 main.go:141] libmachine: (calico-708138)     </serial>
	I0120 16:37:31.045608 2189420 main.go:141] libmachine: (calico-708138)     <console type='pty'>
	I0120 16:37:31.045616 2189420 main.go:141] libmachine: (calico-708138)       <target type='serial' port='0'/>
	I0120 16:37:31.045625 2189420 main.go:141] libmachine: (calico-708138)     </console>
	I0120 16:37:31.045633 2189420 main.go:141] libmachine: (calico-708138)     <rng model='virtio'>
	I0120 16:37:31.045643 2189420 main.go:141] libmachine: (calico-708138)       <backend model='random'>/dev/random</backend>
	I0120 16:37:31.045654 2189420 main.go:141] libmachine: (calico-708138)     </rng>
	I0120 16:37:31.045670 2189420 main.go:141] libmachine: (calico-708138)     
	I0120 16:37:31.045682 2189420 main.go:141] libmachine: (calico-708138)     
	I0120 16:37:31.045689 2189420 main.go:141] libmachine: (calico-708138)   </devices>
	I0120 16:37:31.045700 2189420 main.go:141] libmachine: (calico-708138) </domain>
	I0120 16:37:31.045718 2189420 main.go:141] libmachine: (calico-708138) 
	I0120 16:37:31.049752 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:06:42:af in network default
	I0120 16:37:31.050487 2189420 main.go:141] libmachine: (calico-708138) starting domain...
	I0120 16:37:31.050509 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:31.050516 2189420 main.go:141] libmachine: (calico-708138) ensuring networks are active...
	I0120 16:37:31.051214 2189420 main.go:141] libmachine: (calico-708138) Ensuring network default is active
	I0120 16:37:31.051486 2189420 main.go:141] libmachine: (calico-708138) Ensuring network mk-calico-708138 is active
	I0120 16:37:31.052000 2189420 main.go:141] libmachine: (calico-708138) getting domain XML...
	I0120 16:37:31.052849 2189420 main.go:141] libmachine: (calico-708138) creating domain...
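
With the domain XML assembled above, creating the VM is a define-then-start pair against libvirt. The kvm2 driver plugin does this through the libvirt API; a reduced sketch that reaches the same state through the virsh CLI instead (the XML file path here is illustrative, not taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart writes the domain XML to a temp file, registers it with
// virsh define, and boots it with virsh start. This mirrors the define/start
// sequence in the log but substitutes the virsh CLI for the libvirt API calls
// the kvm2 driver actually makes.
func defineAndStart(name, domainXML string) error {
	f, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()

	for _, args := range [][]string{
		{"define", f.Name()}, // register the domain from the XML above
		{"start", name},      // boot it; DHCP assignment happens afterwards
	} {
		cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	xml, err := os.ReadFile("calico-708138.xml") // hypothetical dump of the XML logged above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := defineAndStart("calico-708138", string(xml)); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
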
	I0120 16:37:32.517802 2189420 main.go:141] libmachine: (calico-708138) waiting for IP...
	I0120 16:37:32.518786 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:32.519422 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:32.519492 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:32.519415 2189579 retry.go:31] will retry after 224.372142ms: waiting for domain to come up
	I0120 16:37:32.746223 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:32.747123 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:32.747151 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:32.747097 2189579 retry.go:31] will retry after 386.692657ms: waiting for domain to come up
	I0120 16:37:33.136083 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:33.136792 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:33.136824 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:33.136667 2189579 retry.go:31] will retry after 306.692064ms: waiting for domain to come up
	I0120 16:37:33.445531 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:33.446196 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:33.446228 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:33.446169 2189579 retry.go:31] will retry after 388.809879ms: waiting for domain to come up
	I0120 16:37:33.837020 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:33.837752 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:33.837787 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:33.837735 2189579 retry.go:31] will retry after 720.360568ms: waiting for domain to come up
	I0120 16:37:34.559563 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:34.560112 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:34.560150 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:34.560043 2189579 retry.go:31] will retry after 827.884996ms: waiting for domain to come up
	I0120 16:37:35.389369 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:35.389957 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:35.390008 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:35.389930 2189579 retry.go:31] will retry after 1.013549829s: waiting for domain to come up
	I0120 16:37:36.405628 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:36.406129 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:36.406194 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:36.406112 2189579 retry.go:31] will retry after 1.018015795s: waiting for domain to come up
	I0120 16:37:37.425404 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:37.426108 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:37.426185 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:37.426099 2189579 retry.go:31] will retry after 1.694970492s: waiting for domain to come up
	I0120 16:37:39.123079 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:39.123599 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:39.123631 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:39.123571 2189579 retry.go:31] will retry after 2.245684037s: waiting for domain to come up
	I0120 16:37:41.370576 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:41.371166 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:41.371201 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:41.371122 2189579 retry.go:31] will retry after 2.393687431s: waiting for domain to come up
	I0120 16:37:43.766957 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:43.767645 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:43.767678 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:43.767606 2189579 retry.go:31] will retry after 2.458964784s: waiting for domain to come up
	I0120 16:37:46.227720 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:46.228272 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:46.228299 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:46.228237 2189579 retry.go:31] will retry after 3.766020841s: waiting for domain to come up
	I0120 16:37:49.998248 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:49.998725 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find current IP address of domain calico-708138 in network mk-calico-708138
	I0120 16:37:49.998769 2189420 main.go:141] libmachine: (calico-708138) DBG | I0120 16:37:49.998697 2189579 retry.go:31] will retry after 4.151473439s: waiting for domain to come up
	I0120 16:37:54.151599 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.152109 2189420 main.go:141] libmachine: (calico-708138) found domain IP: 192.168.72.179
	I0120 16:37:54.152141 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has current primary IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
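
The retry lines above are the wait for the guest's DHCP lease: the driver re-checks the network with a delay that grows from roughly 224ms up to ~4.2s until a lease for MAC 52:54:00:29:44:1a shows up. A sketch of the same wait, assuming the virsh net-dhcp-leases command as a stand-in for the lease lookup the driver performs through the libvirt API:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls the DHCP leases of a libvirt network until a lease for
// the given MAC appears, sleeping a little longer after each failed attempt,
// roughly like the retry.go backoff visible in the log above.
func waitForLease(network, mac string, timeout time.Duration) (bool, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "-c", "qemu:///system", "net-dhcp-leases", network).CombinedOutput()
		if err != nil {
			return false, err
		}
		if strings.Contains(strings.ToLower(string(out)), strings.ToLower(mac)) {
			return true, nil
		}
		// Sleep, then grow the delay for the next probe.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		delay *= 2
	}
	return false, nil
}

func main() {
	ok, err := waitForLease("mk-calico-708138", "52:54:00:29:44:1a", 2*time.Minute)
	fmt.Println(ok, err)
}
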
	I0120 16:37:54.152149 2189420 main.go:141] libmachine: (calico-708138) reserving static IP address...
	I0120 16:37:54.152582 2189420 main.go:141] libmachine: (calico-708138) DBG | unable to find host DHCP lease matching {name: "calico-708138", mac: "52:54:00:29:44:1a", ip: "192.168.72.179"} in network mk-calico-708138
	I0120 16:37:54.237505 2189420 main.go:141] libmachine: (calico-708138) reserved static IP address 192.168.72.179 for domain calico-708138
	I0120 16:37:54.237535 2189420 main.go:141] libmachine: (calico-708138) waiting for SSH...
	I0120 16:37:54.237544 2189420 main.go:141] libmachine: (calico-708138) DBG | Getting to WaitForSSH function...
	I0120 16:37:54.240507 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.240980 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:54.241031 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.241401 2189420 main.go:141] libmachine: (calico-708138) DBG | Using SSH client type: external
	I0120 16:37:54.241423 2189420 main.go:141] libmachine: (calico-708138) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa (-rw-------)
	I0120 16:37:54.241461 2189420 main.go:141] libmachine: (calico-708138) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:37:54.241482 2189420 main.go:141] libmachine: (calico-708138) DBG | About to run SSH command:
	I0120 16:37:54.241495 2189420 main.go:141] libmachine: (calico-708138) DBG | exit 0
	I0120 16:37:54.371087 2189420 main.go:141] libmachine: (calico-708138) DBG | SSH cmd err, output: <nil>: 
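
The WaitForSSH step above shells out to ssh with the options shown and runs "exit 0" until it succeeds. A reduced sketch that only waits for tcp/22 to accept connections, which is a weaker readiness check than the authenticated command the log shows:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSHPort dials tcp/22 until the connection is accepted or the
// deadline passes. The real check in the log goes further and runs
// "exit 0" over an authenticated SSH session with the machine's key.
func waitForSSHPort(ip string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	addr := net.JoinHostPort(ip, "22")
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh port on %s never came up: %v", addr, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	fmt.Println(waitForSSHPort("192.168.72.179", 2*time.Minute))
}
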
	I0120 16:37:54.371406 2189420 main.go:141] libmachine: (calico-708138) KVM machine creation complete
	I0120 16:37:54.371741 2189420 main.go:141] libmachine: (calico-708138) Calling .GetConfigRaw
	I0120 16:37:54.372443 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:37:54.372641 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:37:54.372822 2189420 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:37:54.372858 2189420 main.go:141] libmachine: (calico-708138) Calling .GetState
	I0120 16:37:54.374215 2189420 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:37:54.374229 2189420 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:37:54.374234 2189420 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:37:54.374240 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:54.376838 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.377255 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:54.377287 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.377415 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:54.377610 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:54.377783 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:54.377906 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:54.378105 2189420 main.go:141] libmachine: Using SSH client type: native
	I0120 16:37:54.378291 2189420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0120 16:37:54.378302 2189420 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:37:54.494108 2189420 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:37:54.494135 2189420 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:37:54.494142 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:54.497188 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.497645 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:54.497686 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.497926 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:54.498150 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:54.498310 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:54.498465 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:54.498693 2189420 main.go:141] libmachine: Using SSH client type: native
	I0120 16:37:54.498928 2189420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0120 16:37:54.498946 2189420 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:37:54.615740 2189420 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:37:54.615847 2189420 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:37:54.615861 2189420 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:37:54.615871 2189420 main.go:141] libmachine: (calico-708138) Calling .GetMachineName
	I0120 16:37:54.616172 2189420 buildroot.go:166] provisioning hostname "calico-708138"
	I0120 16:37:54.616206 2189420 main.go:141] libmachine: (calico-708138) Calling .GetMachineName
	I0120 16:37:54.616395 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:54.618995 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.619347 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:54.619379 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.619525 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:54.619711 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:54.619870 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:54.620050 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:54.620239 2189420 main.go:141] libmachine: Using SSH client type: native
	I0120 16:37:54.620460 2189420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0120 16:37:54.620478 2189420 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-708138 && echo "calico-708138" | sudo tee /etc/hostname
	I0120 16:37:54.752391 2189420 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-708138
	
	I0120 16:37:54.752472 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:54.755298 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.755690 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:54.755736 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.755968 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:54.756187 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:54.756392 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:54.756540 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:54.756745 2189420 main.go:141] libmachine: Using SSH client type: native
	I0120 16:37:54.756943 2189420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0120 16:37:54.756965 2189420 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-708138' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-708138/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-708138' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:37:54.879826 2189420 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:37:54.879864 2189420 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:37:54.879921 2189420 buildroot.go:174] setting up certificates
	I0120 16:37:54.879937 2189420 provision.go:84] configureAuth start
	I0120 16:37:54.879953 2189420 main.go:141] libmachine: (calico-708138) Calling .GetMachineName
	I0120 16:37:54.880242 2189420 main.go:141] libmachine: (calico-708138) Calling .GetIP
	I0120 16:37:54.882962 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.883317 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:54.883348 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.883560 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:54.886060 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.886492 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:54.886519 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:54.886644 2189420 provision.go:143] copyHostCerts
	I0120 16:37:54.886706 2189420 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:37:54.886727 2189420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:37:54.886788 2189420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:37:54.886870 2189420 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:37:54.886878 2189420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:37:54.886897 2189420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:37:54.886944 2189420 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:37:54.886951 2189420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:37:54.886967 2189420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:37:54.887012 2189420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.calico-708138 san=[127.0.0.1 192.168.72.179 calico-708138 localhost minikube]
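
configureAuth generates a per-machine server certificate whose SANs are listed in the line above (127.0.0.1, 192.168.72.179, calico-708138, localhost, minikube). A shortened sketch with crypto/x509; it self-signs for brevity, whereas the provisioning in the log signs with the ca.pem/ca-key.pem pair from the certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-708138"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.179")},
		DNSNames:    []string{"calico-708138", "localhost", "minikube"},
	}
	// Self-signed here; the real code uses the minikube CA as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
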
	I0120 16:37:55.108479 2189420 provision.go:177] copyRemoteCerts
	I0120 16:37:55.108545 2189420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:37:55.108573 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:55.111676 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.111989 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.112034 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.112158 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:55.112362 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:55.112529 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:55.112683 2189420 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa Username:docker}
	I0120 16:37:55.204558 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:37:55.230676 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 16:37:55.255436 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:37:55.280192 2189420 provision.go:87] duration metric: took 400.23183ms to configureAuth
	I0120 16:37:55.280232 2189420 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:37:55.280450 2189420 config.go:182] Loaded profile config "calico-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:37:55.280559 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:55.283531 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.283969 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.283996 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.284196 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:55.284406 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:55.284590 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:55.284763 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:55.284932 2189420 main.go:141] libmachine: Using SSH client type: native
	I0120 16:37:55.285123 2189420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0120 16:37:55.285137 2189420 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:37:55.526865 2189420 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:37:55.526899 2189420 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:37:55.526907 2189420 main.go:141] libmachine: (calico-708138) Calling .GetURL
	I0120 16:37:55.528387 2189420 main.go:141] libmachine: (calico-708138) DBG | using libvirt version 6000000
	I0120 16:37:55.531055 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.531352 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.531396 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.531544 2189420 main.go:141] libmachine: Docker is up and running!
	I0120 16:37:55.531560 2189420 main.go:141] libmachine: Reticulating splines...
	I0120 16:37:55.531570 2189420 client.go:171] duration metric: took 25.140324631s to LocalClient.Create
	I0120 16:37:55.531602 2189420 start.go:167] duration metric: took 25.140411195s to libmachine.API.Create "calico-708138"
	I0120 16:37:55.531616 2189420 start.go:293] postStartSetup for "calico-708138" (driver="kvm2")
	I0120 16:37:55.531633 2189420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:37:55.531678 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:37:55.531942 2189420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:37:55.531969 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:55.534018 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.534359 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.534412 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.534493 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:55.534674 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:55.534826 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:55.534990 2189420 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa Username:docker}
	I0120 16:37:55.624046 2189420 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:37:55.628875 2189420 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:37:55.628909 2189420 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:37:55.628992 2189420 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:37:55.629096 2189420 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:37:55.629226 2189420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:37:55.640514 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:37:55.665949 2189420 start.go:296] duration metric: took 134.313227ms for postStartSetup
	I0120 16:37:55.666007 2189420 main.go:141] libmachine: (calico-708138) Calling .GetConfigRaw
	I0120 16:37:55.666693 2189420 main.go:141] libmachine: (calico-708138) Calling .GetIP
	I0120 16:37:55.669257 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.669591 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.669617 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.669878 2189420 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/config.json ...
	I0120 16:37:55.670075 2189420 start.go:128] duration metric: took 25.301944485s to createHost
	I0120 16:37:55.670110 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:55.672351 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.672763 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.672794 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.672900 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:55.673115 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:55.673273 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:55.673428 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:55.673589 2189420 main.go:141] libmachine: Using SSH client type: native
	I0120 16:37:55.673757 2189420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0120 16:37:55.673768 2189420 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:37:55.787625 2189420 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737391075.767830466
	
	I0120 16:37:55.787652 2189420 fix.go:216] guest clock: 1737391075.767830466
	I0120 16:37:55.787660 2189420 fix.go:229] Guest: 2025-01-20 16:37:55.767830466 +0000 UTC Remote: 2025-01-20 16:37:55.670096264 +0000 UTC m=+45.021858484 (delta=97.734202ms)
	I0120 16:37:55.787684 2189420 fix.go:200] guest clock delta is within tolerance: 97.734202ms
	I0120 16:37:55.787688 2189420 start.go:83] releasing machines lock for "calico-708138", held for 25.419786008s
	I0120 16:37:55.787708 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:37:55.788018 2189420 main.go:141] libmachine: (calico-708138) Calling .GetIP
	I0120 16:37:55.791252 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.791648 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.791681 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.791856 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:37:55.792418 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:37:55.792603 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:37:55.792717 2189420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:37:55.792773 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:55.792848 2189420 ssh_runner.go:195] Run: cat /version.json
	I0120 16:37:55.792879 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:37:55.795489 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.795830 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.795861 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.795881 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.796048 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:55.796222 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:55.796344 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:55.796354 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:55.796373 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:55.796541 2189420 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa Username:docker}
	I0120 16:37:55.796564 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:37:55.796729 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:37:55.796887 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:37:55.797093 2189420 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa Username:docker}
	I0120 16:37:55.902403 2189420 ssh_runner.go:195] Run: systemctl --version
	I0120 16:37:55.909240 2189420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:37:56.071412 2189420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:37:56.078261 2189420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:37:56.078368 2189420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:37:56.095101 2189420 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:37:56.095140 2189420 start.go:495] detecting cgroup driver to use...
	I0120 16:37:56.095237 2189420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:37:56.112406 2189420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:37:56.128049 2189420 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:37:56.128143 2189420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:37:56.143607 2189420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:37:56.159162 2189420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:37:56.281725 2189420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:37:56.444260 2189420 docker.go:233] disabling docker service ...
	I0120 16:37:56.444352 2189420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:37:56.460003 2189420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:37:56.478263 2189420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:37:56.611013 2189420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:37:56.743522 2189420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:37:56.760100 2189420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:37:56.781683 2189420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:37:56.781770 2189420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:37:56.793409 2189420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:37:56.793480 2189420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:37:56.804669 2189420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:37:56.815743 2189420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:37:56.828710 2189420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:37:56.840798 2189420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:37:56.852413 2189420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:37:56.871299 2189420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:37:56.882564 2189420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:37:56.893275 2189420 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:37:56.893356 2189420 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:37:56.908374 2189420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:37:56.918923 2189420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:37:57.046818 2189420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:37:57.148832 2189420 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:37:57.148912 2189420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:37:57.154183 2189420 start.go:563] Will wait 60s for crictl version
	I0120 16:37:57.154244 2189420 ssh_runner.go:195] Run: which crictl
	I0120 16:37:57.159641 2189420 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:37:57.204714 2189420 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:37:57.204817 2189420 ssh_runner.go:195] Run: crio --version
	I0120 16:37:57.235689 2189420 ssh_runner.go:195] Run: crio --version
	I0120 16:37:57.268477 2189420 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:37:57.269914 2189420 main.go:141] libmachine: (calico-708138) Calling .GetIP
	I0120 16:37:57.272753 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:57.273190 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:37:57.273212 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:37:57.273511 2189420 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 16:37:57.278738 2189420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:37:57.292984 2189420 kubeadm.go:883] updating cluster {Name:calico-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:37:57.293159 2189420 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:37:57.293236 2189420 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:37:57.332479 2189420 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:37:57.332569 2189420 ssh_runner.go:195] Run: which lz4
	I0120 16:37:57.337056 2189420 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:37:57.341493 2189420 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:37:57.341534 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:37:58.925650 2189420 crio.go:462] duration metric: took 1.588635841s to copy over tarball
	I0120 16:37:58.925745 2189420 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:38:01.337321 2189420 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.411538734s)
	I0120 16:38:01.337357 2189420 crio.go:469] duration metric: took 2.411670663s to extract the tarball
	I0120 16:38:01.337364 2189420 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:38:01.383275 2189420 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:38:01.433370 2189420 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:38:01.433398 2189420 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:38:01.433408 2189420 kubeadm.go:934] updating node { 192.168.72.179 8443 v1.32.0 crio true true} ...
	I0120 16:38:01.433546 2189420 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-708138 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:calico-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0120 16:38:01.433642 2189420 ssh_runner.go:195] Run: crio config
	I0120 16:38:01.483895 2189420 cni.go:84] Creating CNI manager for "calico"
	I0120 16:38:01.483925 2189420 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:38:01.483950 2189420 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.179 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-708138 NodeName:calico-708138 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:38:01.484119 2189420 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-708138"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:38:01.484207 2189420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:38:01.495607 2189420 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:38:01.495693 2189420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:38:01.506222 2189420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0120 16:38:01.526511 2189420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:38:01.545477 2189420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0120 16:38:01.566643 2189420 ssh_runner.go:195] Run: grep 192.168.72.179	control-plane.minikube.internal$ /etc/hosts
	I0120 16:38:01.571067 2189420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:38:01.585692 2189420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:38:01.741100 2189420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:38:01.762865 2189420 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138 for IP: 192.168.72.179
	I0120 16:38:01.762893 2189420 certs.go:194] generating shared ca certs ...
	I0120 16:38:01.762913 2189420 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:01.763111 2189420 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:38:01.763163 2189420 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:38:01.763174 2189420 certs.go:256] generating profile certs ...
	I0120 16:38:01.763230 2189420 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/client.key
	I0120 16:38:01.763255 2189420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/client.crt with IP's: []
	I0120 16:38:01.946017 2189420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/client.crt ...
	I0120 16:38:01.946055 2189420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/client.crt: {Name:mkf8f19f0357a28754e65d5b7f0d310c2b936575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:01.946259 2189420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/client.key ...
	I0120 16:38:01.946276 2189420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/client.key: {Name:mkc8d6c3c3e23fd9413d041edeee76097983e25b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:01.946414 2189420 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.key.64182f50
	I0120 16:38:01.946432 2189420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.crt.64182f50 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.179]
	I0120 16:38:01.995424 2189420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.crt.64182f50 ...
	I0120 16:38:01.995457 2189420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.crt.64182f50: {Name:mk610c43298a9f1d2e547274d1de54b5e00f3846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:01.995679 2189420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.key.64182f50 ...
	I0120 16:38:01.995699 2189420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.key.64182f50: {Name:mk4ed9fec6cc52e8199766bd9b57b5be20320797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:01.995818 2189420 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.crt.64182f50 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.crt
	I0120 16:38:01.995955 2189420 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.key.64182f50 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.key
	I0120 16:38:01.996032 2189420 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/proxy-client.key
	I0120 16:38:01.996054 2189420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/proxy-client.crt with IP's: []
	I0120 16:38:02.342596 2189420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/proxy-client.crt ...
	I0120 16:38:02.342648 2189420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/proxy-client.crt: {Name:mkda5c6913bb5d3961924a996f2e1a6934f1eedc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:02.342853 2189420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/proxy-client.key ...
	I0120 16:38:02.342871 2189420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/proxy-client.key: {Name:mk9a011088b1980b07057f597fc052645a66829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:02.343083 2189420 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:38:02.343127 2189420 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:38:02.343135 2189420 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:38:02.343157 2189420 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:38:02.343181 2189420 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:38:02.343202 2189420 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:38:02.343239 2189420 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:38:02.343826 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:38:02.376416 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:38:02.404937 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:38:02.439191 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:38:02.469730 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:38:02.508856 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:38:02.538564 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:38:02.565690 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/calico-708138/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 16:38:02.593359 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:38:02.620618 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:38:02.647933 2189420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:38:02.674389 2189420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:38:02.692634 2189420 ssh_runner.go:195] Run: openssl version
	I0120 16:38:02.699369 2189420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:38:02.711294 2189420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:38:02.716431 2189420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:38:02.716515 2189420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:38:02.723274 2189420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:38:02.737253 2189420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:38:02.750568 2189420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:38:02.755963 2189420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:38:02.756054 2189420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:38:02.762553 2189420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:38:02.775423 2189420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:38:02.787276 2189420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:38:02.793970 2189420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:38:02.794054 2189420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:38:02.801421 2189420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:38:02.813686 2189420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:38:02.818983 2189420 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:38:02.819040 2189420 kubeadm.go:392] StartCluster: {Name:calico-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:38:02.819221 2189420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:38:02.819280 2189420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:38:02.871829 2189420 cri.go:89] found id: ""
	I0120 16:38:02.871907 2189420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:38:02.888534 2189420 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:38:02.899796 2189420 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:38:02.910716 2189420 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:38:02.910744 2189420 kubeadm.go:157] found existing configuration files:
	
	I0120 16:38:02.910806 2189420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:38:02.920577 2189420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:38:02.920658 2189420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:38:02.931504 2189420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:38:02.942550 2189420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:38:02.942652 2189420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:38:02.952949 2189420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:38:02.962622 2189420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:38:02.962716 2189420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:38:02.972989 2189420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:38:02.983850 2189420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:38:02.983937 2189420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:38:02.994713 2189420 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:38:03.058207 2189420 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:38:03.058296 2189420 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:38:03.188210 2189420 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:38:03.188386 2189420 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:38:03.188543 2189420 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:38:03.197782 2189420 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:38:03.396366 2189420 out.go:235]   - Generating certificates and keys ...
	I0120 16:38:03.396564 2189420 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:38:03.396653 2189420 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:38:03.409207 2189420 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:38:03.572182 2189420 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:38:04.059782 2189420 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:38:04.206547 2189420 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:38:04.278774 2189420 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:38:04.278979 2189420 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-708138 localhost] and IPs [192.168.72.179 127.0.0.1 ::1]
	I0120 16:38:04.482257 2189420 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:38:04.482406 2189420 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-708138 localhost] and IPs [192.168.72.179 127.0.0.1 ::1]
	I0120 16:38:04.556182 2189420 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:38:04.610591 2189420 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:38:04.935838 2189420 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:38:04.936125 2189420 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:38:05.038833 2189420 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:38:05.198232 2189420 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:38:05.439143 2189420 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:38:05.514872 2189420 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:38:05.842208 2189420 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:38:05.842848 2189420 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:38:05.845463 2189420 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:38:05.847409 2189420 out.go:235]   - Booting up control plane ...
	I0120 16:38:05.847498 2189420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:38:05.847605 2189420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:38:05.847702 2189420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:38:05.870044 2189420 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:38:05.876989 2189420 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:38:05.877075 2189420 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:38:06.009493 2189420 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:38:06.009653 2189420 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:38:06.510857 2189420 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.597613ms
	I0120 16:38:06.511026 2189420 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:38:12.014326 2189420 kubeadm.go:310] [api-check] The API server is healthy after 5.503249057s
	I0120 16:38:12.026128 2189420 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:38:12.043408 2189420 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:38:12.085161 2189420 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:38:12.085494 2189420 kubeadm.go:310] [mark-control-plane] Marking the node calico-708138 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:38:12.100971 2189420 kubeadm.go:310] [bootstrap-token] Using token: 6wdmq3.dt2mq0zp7u1p1w2x
	I0120 16:38:12.102670 2189420 out.go:235]   - Configuring RBAC rules ...
	I0120 16:38:12.102812 2189420 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:38:12.110092 2189420 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:38:12.123841 2189420 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:38:12.127819 2189420 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:38:12.131273 2189420 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:38:12.135056 2189420 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:38:12.419648 2189420 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:38:12.845979 2189420 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:38:13.419579 2189420 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:38:13.420612 2189420 kubeadm.go:310] 
	I0120 16:38:13.420711 2189420 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:38:13.420722 2189420 kubeadm.go:310] 
	I0120 16:38:13.420803 2189420 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:38:13.420810 2189420 kubeadm.go:310] 
	I0120 16:38:13.420831 2189420 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:38:13.420882 2189420 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:38:13.420926 2189420 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:38:13.420930 2189420 kubeadm.go:310] 
	I0120 16:38:13.421017 2189420 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:38:13.421031 2189420 kubeadm.go:310] 
	I0120 16:38:13.421080 2189420 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:38:13.421085 2189420 kubeadm.go:310] 
	I0120 16:38:13.421160 2189420 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:38:13.421310 2189420 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:38:13.421436 2189420 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:38:13.421447 2189420 kubeadm.go:310] 
	I0120 16:38:13.421519 2189420 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:38:13.421588 2189420 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:38:13.421594 2189420 kubeadm.go:310] 
	I0120 16:38:13.421704 2189420 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6wdmq3.dt2mq0zp7u1p1w2x \
	I0120 16:38:13.421849 2189420 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:38:13.421878 2189420 kubeadm.go:310] 	--control-plane 
	I0120 16:38:13.421887 2189420 kubeadm.go:310] 
	I0120 16:38:13.421997 2189420 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:38:13.422009 2189420 kubeadm.go:310] 
	I0120 16:38:13.422125 2189420 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6wdmq3.dt2mq0zp7u1p1w2x \
	I0120 16:38:13.422274 2189420 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:38:13.423188 2189420 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:38:13.423324 2189420 cni.go:84] Creating CNI manager for "calico"
	I0120 16:38:13.425345 2189420 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0120 16:38:13.427805 2189420 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 16:38:13.427833 2189420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (323422 bytes)
	I0120 16:38:13.453085 2189420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 16:38:15.185501 2189420 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.732368531s)
	I0120 16:38:15.185568 2189420 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:38:15.185656 2189420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:38:15.185748 2189420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-708138 minikube.k8s.io/updated_at=2025_01_20T16_38_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=calico-708138 minikube.k8s.io/primary=true
	I0120 16:38:15.220126 2189420 ops.go:34] apiserver oom_adj: -16
	I0120 16:38:15.309248 2189420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:38:15.809837 2189420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:38:16.310192 2189420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:38:16.810077 2189420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:38:17.309642 2189420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:38:17.550320 2189420 kubeadm.go:1113] duration metric: took 2.364705759s to wait for elevateKubeSystemPrivileges
	I0120 16:38:17.550363 2189420 kubeadm.go:394] duration metric: took 14.731327461s to StartCluster
	I0120 16:38:17.550387 2189420 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:17.550471 2189420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:38:17.552846 2189420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:38:17.553142 2189420 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 16:38:17.553156 2189420 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:38:17.553130 2189420 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:38:17.553250 2189420 addons.go:69] Setting storage-provisioner=true in profile "calico-708138"
	I0120 16:38:17.553391 2189420 config.go:182] Loaded profile config "calico-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:38:17.553395 2189420 addons.go:238] Setting addon storage-provisioner=true in "calico-708138"
	I0120 16:38:17.553528 2189420 host.go:66] Checking if "calico-708138" exists ...
	I0120 16:38:17.553257 2189420 addons.go:69] Setting default-storageclass=true in profile "calico-708138"
	I0120 16:38:17.553592 2189420 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-708138"
	I0120 16:38:17.553976 2189420 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:38:17.554005 2189420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:38:17.554011 2189420 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:38:17.554050 2189420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:38:17.554987 2189420 out.go:177] * Verifying Kubernetes components...
	I0120 16:38:17.556230 2189420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:38:17.577058 2189420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0120 16:38:17.577058 2189420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I0120 16:38:17.577823 2189420 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:38:17.577909 2189420 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:38:17.578631 2189420 main.go:141] libmachine: Using API Version  1
	I0120 16:38:17.578643 2189420 main.go:141] libmachine: Using API Version  1
	I0120 16:38:17.578654 2189420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:38:17.578661 2189420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:38:17.579088 2189420 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:38:17.579150 2189420 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:38:17.579290 2189420 main.go:141] libmachine: (calico-708138) Calling .GetState
	I0120 16:38:17.580028 2189420 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:38:17.580056 2189420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:38:17.583652 2189420 addons.go:238] Setting addon default-storageclass=true in "calico-708138"
	I0120 16:38:17.583704 2189420 host.go:66] Checking if "calico-708138" exists ...
	I0120 16:38:17.584073 2189420 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:38:17.584107 2189420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:38:17.599812 2189420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0120 16:38:17.600333 2189420 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:38:17.601030 2189420 main.go:141] libmachine: Using API Version  1
	I0120 16:38:17.601055 2189420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:38:17.601491 2189420 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:38:17.602205 2189420 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:38:17.602242 2189420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:38:17.602914 2189420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0120 16:38:17.603317 2189420 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:38:17.603758 2189420 main.go:141] libmachine: Using API Version  1
	I0120 16:38:17.603776 2189420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:38:17.604061 2189420 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:38:17.604256 2189420 main.go:141] libmachine: (calico-708138) Calling .GetState
	I0120 16:38:17.605895 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:38:17.608112 2189420 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:38:17.609656 2189420 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:38:17.609672 2189420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:38:17.609689 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:38:17.612982 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:38:17.613538 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:38:17.613573 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:38:17.613843 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:38:17.614021 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:38:17.614210 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:38:17.614365 2189420 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa Username:docker}
	I0120 16:38:17.624321 2189420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0120 16:38:17.624779 2189420 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:38:17.625346 2189420 main.go:141] libmachine: Using API Version  1
	I0120 16:38:17.625376 2189420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:38:17.625759 2189420 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:38:17.625997 2189420 main.go:141] libmachine: (calico-708138) Calling .GetState
	I0120 16:38:17.627807 2189420 main.go:141] libmachine: (calico-708138) Calling .DriverName
	I0120 16:38:17.628033 2189420 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:38:17.628051 2189420 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:38:17.628073 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHHostname
	I0120 16:38:17.631658 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:38:17.632344 2189420 main.go:141] libmachine: (calico-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:44:1a", ip: ""} in network mk-calico-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:37:47 +0000 UTC Type:0 Mac:52:54:00:29:44:1a Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-708138 Clientid:01:52:54:00:29:44:1a}
	I0120 16:38:17.632378 2189420 main.go:141] libmachine: (calico-708138) DBG | domain calico-708138 has defined IP address 192.168.72.179 and MAC address 52:54:00:29:44:1a in network mk-calico-708138
	I0120 16:38:17.632604 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHPort
	I0120 16:38:17.632784 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHKeyPath
	I0120 16:38:17.632960 2189420 main.go:141] libmachine: (calico-708138) Calling .GetSSHUsername
	I0120 16:38:17.633133 2189420 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/calico-708138/id_rsa Username:docker}
	I0120 16:38:17.946820 2189420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:38:17.946930 2189420 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 16:38:17.952684 2189420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:38:18.013913 2189420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:38:18.544735 2189420 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0120 16:38:18.544856 2189420 main.go:141] libmachine: Making call to close driver server
	I0120 16:38:18.544886 2189420 main.go:141] libmachine: (calico-708138) Calling .Close
	I0120 16:38:18.545272 2189420 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:38:18.545295 2189420 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:38:18.545310 2189420 main.go:141] libmachine: Making call to close driver server
	I0120 16:38:18.545319 2189420 main.go:141] libmachine: (calico-708138) Calling .Close
	I0120 16:38:18.545771 2189420 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:38:18.545782 2189420 main.go:141] libmachine: (calico-708138) DBG | Closing plugin on server side
	I0120 16:38:18.545792 2189420 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:38:18.546775 2189420 node_ready.go:35] waiting up to 15m0s for node "calico-708138" to be "Ready" ...
	I0120 16:38:18.604576 2189420 main.go:141] libmachine: Making call to close driver server
	I0120 16:38:18.604608 2189420 main.go:141] libmachine: (calico-708138) Calling .Close
	I0120 16:38:18.604981 2189420 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:38:18.605010 2189420 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:38:18.605037 2189420 main.go:141] libmachine: (calico-708138) DBG | Closing plugin on server side
	I0120 16:38:18.902340 2189420 main.go:141] libmachine: Making call to close driver server
	I0120 16:38:18.902369 2189420 main.go:141] libmachine: (calico-708138) Calling .Close
	I0120 16:38:18.902745 2189420 main.go:141] libmachine: (calico-708138) DBG | Closing plugin on server side
	I0120 16:38:18.902756 2189420 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:38:18.902773 2189420 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:38:18.902785 2189420 main.go:141] libmachine: Making call to close driver server
	I0120 16:38:18.902797 2189420 main.go:141] libmachine: (calico-708138) Calling .Close
	I0120 16:38:18.903067 2189420 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:38:18.903144 2189420 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:38:18.903111 2189420 main.go:141] libmachine: (calico-708138) DBG | Closing plugin on server side
	I0120 16:38:18.904896 2189420 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0120 16:38:18.906818 2189420 addons.go:514] duration metric: took 1.353652022s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0120 16:38:19.054483 2189420 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-708138" context rescaled to 1 replicas
	I0120 16:38:20.550875 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:22.551348 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:25.051758 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:27.551636 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:30.051098 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:32.051208 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:34.551597 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:37.051544 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:39.551289 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:41.554596 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:44.051771 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:46.052021 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:48.551298 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:51.051373 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:53.051731 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:55.051820 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:38:57.551808 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:00.051064 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:02.051356 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:04.551640 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:07.050711 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:09.552608 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:12.051273 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:14.052221 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:16.551001 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:18.551162 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:21.052679 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:23.052800 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:25.552261 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:28.051033 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:30.551654 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:32.552391 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:35.051800 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:37.052672 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:39.551956 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:41.552286 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:44.051212 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:46.551077 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:48.551333 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:51.051388 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:53.550235 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:55.552068 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:39:58.050861 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:00.051665 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:02.054914 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:04.551012 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:06.551932 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:09.052271 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:11.552717 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:14.051668 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:16.550941 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:18.551342 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:20.551813 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:23.051265 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:25.051530 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:27.053067 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:29.552245 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:31.552322 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:34.050721 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:36.051224 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:38.051603 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:40.550801 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:42.550898 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:44.551273 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:46.552019 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:49.052713 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:51.552464 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:54.050837 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:56.551246 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:40:59.051020 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:01.051709 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:03.550836 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:05.551480 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:07.552146 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:10.051010 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:12.051562 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:14.551569 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:17.051237 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:19.052644 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:21.551560 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:24.052288 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:26.551780 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:29.050537 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:31.051364 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:33.551645 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:36.050972 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:38.051422 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:40.550989 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:43.051041 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:45.051097 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:47.551054 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:50.052093 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:52.551625 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:54.552381 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:57.051238 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:41:59.552087 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:02.051137 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:04.051672 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:06.051780 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:08.551694 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:11.050973 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:13.051511 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:15.551946 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:18.050630 2189420 node_ready.go:53] node "calico-708138" has status "Ready":"False"
	I0120 16:42:18.550830 2189420 node_ready.go:38] duration metric: took 4m0.004021109s for node "calico-708138" to be "Ready" ...
	I0120 16:42:18.552811 2189420 out.go:201] 
	W0120 16:42:18.553983 2189420 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0120 16:42:18.553998 2189420 out.go:270] * 
	* 
	W0120 16:42:18.554896 2189420 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:42:18.556653 2189420 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (307.95s)
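
For readers unfamiliar with the wait loop behind the node_ready.go lines above: the start path repeatedly polls the node's Ready condition until a deadline, and the test fails when the deadline passes first. The snippet below is a minimal, hypothetical sketch of that kind of poll using client-go; it is not minikube's actual node_ready.go, and the function name nodeReady, the 2-second cadence, and the default kubeconfig path are illustrative assumptions only.

	// Sketch only: polling a node's Ready condition with client-go, roughly
	// mirroring the repeated "has status Ready:False" checks in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady polls the named node until its Ready condition is True or the
	// timeout elapses. Name, cadence, and timeout here are illustrative.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}

	func main() {
		// Assumes a kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := nodeReady(context.Background(), cs, "calico-708138", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

In the failing run above, the node never reported Ready before the deadline, so the start exits with GUEST_START (exit status 80) and the test records the failure.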

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:40:48.032850 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:41:08.734699 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:41:41.344263 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:41:41.350743 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:41:41.362184 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:41:41.383772 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:41:41.425298 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:41:41.506783 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:41:41.668411 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:41:41.989742 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:41:42.631949 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:41:43.913479 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:41:51.597087 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:42:01.839472 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 20 more times]
E0120 16:42:22.321711 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 27 more times]
E0120 16:42:50.815661 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 12 more times]
E0120 16:43:03.283479 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:43:04.173402 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:43:07.247552 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:43:07.254057 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:43:07.265501 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:43:07.287009 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:43:07.329153 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:43:07.410681 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:43:07.572344 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:43:07.894166 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
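The cert_rotation.go:171 errors interleaved here are what client-go's client-certificate reload path logs when a certificate file it was built with disappears from disk; the referenced profiles (auto-708138, kindnet-708138, no-preload-552545, default-k8s-diff-port-024679) were presumably deleted by earlier tests while clients built from their kubeconfig entries are still being refreshed. A minimal sketch of the same check, assuming client-go's clientcmd loader and a KUBECONFIG environment variable (an illustration only, not code from the minikube suite):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load whatever kubeconfig the run is using (the path source is an assumption here).
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		// A profile deleted earlier in the run leaves behind a user entry whose
		// client-certificate file no longer exists; stat-ing it reproduces the
		// "open ...: no such file or directory" seen in the cert_rotation errors.
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue
			}
			if _, err := os.Stat(auth.ClientCertificate); err != nil {
				fmt.Printf("user %q: %v\n", name, err)
			}
		}
	}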
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:43:08.536425 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:43:09.818194 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:43:12.380174 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 4 more times]
E0120 16:43:17.501834 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:43:18.517680 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 8 more times]
E0120 16:43:27.743847 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 3 more times]
E0120 16:43:31.875045 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 16 more times]
E0120 16:43:48.225219 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 36 more times]
E0120 16:44:25.205511 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[previous warning repeated 3 more times]
E0120 16:44:29.187602 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:44:45.663830 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:44:52.134441 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:44:52.140941 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:44:52.152420 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:44:52.173926 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:44:52.215524 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:44:52.297118 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:44:52.458740 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:44:52.780542 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:44:53.422675 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:44:54.704443 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:44:57.266808 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:45:02.388792 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:45:12.630932 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:45:33.113300 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:45:51.109422 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:46:14.075620 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:46:41.344118 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:46:46.006505 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:46:46.012880 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:46:46.024231 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:46:46.045658 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:46:46.087079 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:46:46.168645 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:46:46.330252 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:46:46.652123 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:46:47.294272 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:46:48.575851 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:46:51.137162 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:46:56.259230 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:47:06.501028 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:47:09.047201 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:47:14.250115 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:47:26.983018 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:47:35.997901 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:47:50.815591 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:04.172679 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:07.247531 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:07.944429 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:34.950853 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:41.932920 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:48:41.939398 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:48:41.950716 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:48:41.972204 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:48:42.013729 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:48:42.095219 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:48:42.257114 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:48:42.578962 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:43.220829 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:44.503006 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:47.064709 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:48:52.186094 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:49:02.427854 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:49:22.909610 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[the identical warning above was logged 6 times in a row; repeats elided]
E0120 16:49:29.866528 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
[the identical warning above was logged 9 times in a row; repeats elided]
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 2 (238.629593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-806597" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
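For reference, the check that timed out above polls the kubernetes-dashboard namespace for pods carrying the k8s-app=kubernetes-dashboard label (the same query shown in the connection-refused warnings). Once the profile's apiserver is reachable again, an equivalent manual check would be, as a sketch using the profile name from this log:

	kubectl --context old-k8s-version-806597 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard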
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 2 (231.044094ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-806597 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo docker                        | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo find                          | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo crio                          | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p flannel-708138                                    | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:42:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:42:32.008473 2197206 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:42:32.008621 2197206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:42:32.008635 2197206 out.go:358] Setting ErrFile to fd 2...
	I0120 16:42:32.008642 2197206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:42:32.008834 2197206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:42:32.009438 2197206 out.go:352] Setting JSON to false
	I0120 16:42:32.010574 2197206 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":30298,"bootTime":1737361054,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:42:32.010728 2197206 start.go:139] virtualization: kvm guest
	I0120 16:42:32.013230 2197206 out.go:177] * [bridge-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:42:32.014892 2197206 notify.go:220] Checking for updates...
	I0120 16:42:32.014906 2197206 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:42:32.016448 2197206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:42:32.017869 2197206 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:42:32.019315 2197206 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:32.020696 2197206 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:42:32.022005 2197206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:42:32.023905 2197206 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:32.024041 2197206 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:32.024168 2197206 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:42:32.024283 2197206 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:42:32.065664 2197206 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:42:32.067124 2197206 start.go:297] selected driver: kvm2
	I0120 16:42:32.067147 2197206 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:42:32.067160 2197206 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:42:32.067963 2197206 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:42:32.068068 2197206 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:42:32.087530 2197206 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:42:32.087602 2197206 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:42:32.087872 2197206 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:42:32.087908 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:42:32.087916 2197206 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:42:32.087987 2197206 start.go:340] cluster config:
	{Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:42:32.088138 2197206 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:42:32.090276 2197206 out.go:177] * Starting "bridge-708138" primary control-plane node in "bridge-708138" cluster
	I0120 16:42:30.420362 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:30.421027 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:30.421059 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:30.420994 2195575 retry.go:31] will retry after 3.907613054s: waiting for domain to come up
	I0120 16:42:32.091652 2197206 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:42:32.091722 2197206 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:42:32.091737 2197206 cache.go:56] Caching tarball of preloaded images
	I0120 16:42:32.091846 2197206 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:42:32.091859 2197206 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:42:32.091963 2197206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json ...
	I0120 16:42:32.091983 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json: {Name:mk67d90943d59835916cc1f1dddad0547daa252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:32.092126 2197206 start.go:360] acquireMachinesLock for bridge-708138: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:42:34.330849 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:34.331412 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:34.331455 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:34.331358 2195575 retry.go:31] will retry after 5.584556774s: waiting for domain to come up
	I0120 16:42:41.479851 2197206 start.go:364] duration metric: took 9.387696864s to acquireMachinesLock for "bridge-708138"
	I0120 16:42:41.479942 2197206 start.go:93] Provisioning new machine with config: &{Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:42:41.480071 2197206 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:42:41.482328 2197206 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 16:42:41.482654 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:42:41.482727 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:42:41.499933 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0120 16:42:41.500357 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:42:41.500878 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:42:41.500905 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:42:41.501247 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:42:41.501477 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:42:41.501622 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:42:41.501777 2197206 start.go:159] libmachine.API.Create for "bridge-708138" (driver="kvm2")
	I0120 16:42:41.501811 2197206 client.go:168] LocalClient.Create starting
	I0120 16:42:41.501865 2197206 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:42:41.501911 2197206 main.go:141] libmachine: Decoding PEM data...
	I0120 16:42:41.501942 2197206 main.go:141] libmachine: Parsing certificate...
	I0120 16:42:41.502018 2197206 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:42:41.502048 2197206 main.go:141] libmachine: Decoding PEM data...
	I0120 16:42:41.502079 2197206 main.go:141] libmachine: Parsing certificate...
	I0120 16:42:41.502119 2197206 main.go:141] libmachine: Running pre-create checks...
	I0120 16:42:41.502134 2197206 main.go:141] libmachine: (bridge-708138) Calling .PreCreateCheck
	I0120 16:42:41.502482 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:42:41.503075 2197206 main.go:141] libmachine: Creating machine...
	I0120 16:42:41.503098 2197206 main.go:141] libmachine: (bridge-708138) Calling .Create
	I0120 16:42:41.503237 2197206 main.go:141] libmachine: (bridge-708138) creating KVM machine...
	I0120 16:42:41.503270 2197206 main.go:141] libmachine: (bridge-708138) creating network...
	I0120 16:42:41.504580 2197206 main.go:141] libmachine: (bridge-708138) DBG | found existing default KVM network
	I0120 16:42:41.506204 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.505980 2197289 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:9b:e5} reservation:<nil>}
	I0120 16:42:41.507221 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.507124 2197289 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:0e:01} reservation:<nil>}
	I0120 16:42:41.508246 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.508159 2197289 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:2c:a8} reservation:<nil>}
	I0120 16:42:41.509727 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.509645 2197289 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a19c0}
	I0120 16:42:41.509792 2197206 main.go:141] libmachine: (bridge-708138) DBG | created network xml: 
	I0120 16:42:41.509817 2197206 main.go:141] libmachine: (bridge-708138) DBG | <network>
	I0120 16:42:41.509828 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <name>mk-bridge-708138</name>
	I0120 16:42:41.509848 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <dns enable='no'/>
	I0120 16:42:41.509881 2197206 main.go:141] libmachine: (bridge-708138) DBG |   
	I0120 16:42:41.509906 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0120 16:42:41.509920 2197206 main.go:141] libmachine: (bridge-708138) DBG |     <dhcp>
	I0120 16:42:41.509931 2197206 main.go:141] libmachine: (bridge-708138) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0120 16:42:41.509939 2197206 main.go:141] libmachine: (bridge-708138) DBG |     </dhcp>
	I0120 16:42:41.509943 2197206 main.go:141] libmachine: (bridge-708138) DBG |   </ip>
	I0120 16:42:41.509948 2197206 main.go:141] libmachine: (bridge-708138) DBG |   
	I0120 16:42:41.509953 2197206 main.go:141] libmachine: (bridge-708138) DBG | </network>
	I0120 16:42:41.509966 2197206 main.go:141] libmachine: (bridge-708138) DBG | 
	I0120 16:42:41.515816 2197206 main.go:141] libmachine: (bridge-708138) DBG | trying to create private KVM network mk-bridge-708138 192.168.72.0/24...
	I0120 16:42:41.591057 2197206 main.go:141] libmachine: (bridge-708138) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 ...
	I0120 16:42:41.591103 2197206 main.go:141] libmachine: (bridge-708138) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:42:41.591115 2197206 main.go:141] libmachine: (bridge-708138) DBG | private KVM network mk-bridge-708138 192.168.72.0/24 created
	I0120 16:42:41.591137 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.590985 2197289 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:41.591176 2197206 main.go:141] libmachine: (bridge-708138) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:42:41.878512 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.878362 2197289 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa...
	I0120 16:42:39.917690 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:39.918271 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has current primary IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:39.918301 2195552 main.go:141] libmachine: (flannel-708138) found domain IP: 192.168.39.206
	I0120 16:42:39.918314 2195552 main.go:141] libmachine: (flannel-708138) reserving static IP address...
	I0120 16:42:39.918709 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find host DHCP lease matching {name: "flannel-708138", mac: "52:54:00:ff:a2:3d", ip: "192.168.39.206"} in network mk-flannel-708138
	I0120 16:42:40.002772 2195552 main.go:141] libmachine: (flannel-708138) DBG | Getting to WaitForSSH function...
	I0120 16:42:40.002812 2195552 main.go:141] libmachine: (flannel-708138) reserved static IP address 192.168.39.206 for domain flannel-708138
	I0120 16:42:40.002826 2195552 main.go:141] libmachine: (flannel-708138) waiting for SSH...
	I0120 16:42:40.005462 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.005818 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.005841 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.006030 2195552 main.go:141] libmachine: (flannel-708138) DBG | Using SSH client type: external
	I0120 16:42:40.006070 2195552 main.go:141] libmachine: (flannel-708138) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa (-rw-------)
	I0120 16:42:40.006114 2195552 main.go:141] libmachine: (flannel-708138) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:42:40.006136 2195552 main.go:141] libmachine: (flannel-708138) DBG | About to run SSH command:
	I0120 16:42:40.006152 2195552 main.go:141] libmachine: (flannel-708138) DBG | exit 0
	I0120 16:42:40.135269 2195552 main.go:141] libmachine: (flannel-708138) DBG | SSH cmd err, output: <nil>: 
	I0120 16:42:40.135526 2195552 main.go:141] libmachine: (flannel-708138) KVM machine creation complete
	I0120 16:42:40.135876 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:40.136615 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:40.136828 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:40.137011 2195552 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:42:40.137029 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:42:40.138406 2195552 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:42:40.138423 2195552 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:42:40.138452 2195552 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:42:40.138464 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.140844 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.141163 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.141205 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.141321 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.141497 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.141697 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.141855 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.142022 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.142224 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.142236 2195552 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:42:40.250660 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:42:40.250692 2195552 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:42:40.250703 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.253520 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.253863 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.253919 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.254020 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.254263 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.254462 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.254593 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.254769 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.254954 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.254966 2195552 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:42:40.371879 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:42:40.371990 2195552 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:42:40.372011 2195552 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:42:40.372023 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.372291 2195552 buildroot.go:166] provisioning hostname "flannel-708138"
	I0120 16:42:40.372320 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.372554 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.375287 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.375686 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.375717 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.375925 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.376151 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.376353 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.376496 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.376659 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.376836 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.376848 2195552 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-708138 && echo "flannel-708138" | sudo tee /etc/hostname
	I0120 16:42:40.501787 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-708138
	
	I0120 16:42:40.501820 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.504836 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.505242 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.505267 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.505435 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.505652 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.505809 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.505915 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.506087 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.506277 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.506293 2195552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-708138' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-708138/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-708138' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:42:40.628479 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:42:40.628514 2195552 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:42:40.628580 2195552 buildroot.go:174] setting up certificates
	I0120 16:42:40.628599 2195552 provision.go:84] configureAuth start
	I0120 16:42:40.628618 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.628897 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:40.631696 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.632058 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.632103 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.632242 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.634596 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.634957 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.634983 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.635147 2195552 provision.go:143] copyHostCerts
	I0120 16:42:40.635203 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:42:40.635213 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:42:40.635282 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:42:40.635416 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:42:40.635427 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:42:40.635466 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:42:40.635533 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:42:40.635540 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:42:40.635560 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:42:40.635622 2195552 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.flannel-708138 san=[127.0.0.1 192.168.39.206 flannel-708138 localhost minikube]
	I0120 16:42:40.788476 2195552 provision.go:177] copyRemoteCerts
	I0120 16:42:40.788537 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:42:40.788565 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.791448 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.791862 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.791889 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.792091 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.792295 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.792425 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.792541 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:40.877555 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:42:40.904115 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0120 16:42:40.933842 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:42:40.962366 2195552 provision.go:87] duration metric: took 333.749236ms to configureAuth
	I0120 16:42:40.962401 2195552 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:42:40.962639 2195552 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:40.962740 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.965753 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.966102 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.966137 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.966346 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.966578 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.966794 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.966936 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.967135 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.967319 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.967333 2195552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:42:41.219615 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:42:41.219649 2195552 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:42:41.219660 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetURL
	I0120 16:42:41.220953 2195552 main.go:141] libmachine: (flannel-708138) DBG | using libvirt version 6000000
	I0120 16:42:41.223183 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.223607 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.223639 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.223729 2195552 main.go:141] libmachine: Docker is up and running!
	I0120 16:42:41.223743 2195552 main.go:141] libmachine: Reticulating splines...
	I0120 16:42:41.223752 2195552 client.go:171] duration metric: took 27.127384878s to LocalClient.Create
	I0120 16:42:41.223781 2195552 start.go:167] duration metric: took 27.127453023s to libmachine.API.Create "flannel-708138"
	I0120 16:42:41.223794 2195552 start.go:293] postStartSetup for "flannel-708138" (driver="kvm2")
	I0120 16:42:41.223803 2195552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:42:41.223831 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.224099 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:42:41.224137 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.226284 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.226568 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.226594 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.226810 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.226999 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.227158 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.227283 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.313516 2195552 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:42:41.318553 2195552 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:42:41.318588 2195552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:42:41.318691 2195552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:42:41.318822 2195552 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:42:41.318966 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:42:41.329039 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:42:41.359288 2195552 start.go:296] duration metric: took 135.474673ms for postStartSetup
	I0120 16:42:41.359376 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:41.360116 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:41.363418 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.363768 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.363797 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.364037 2195552 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/config.json ...
	I0120 16:42:41.364306 2195552 start.go:128] duration metric: took 27.289215285s to createHost
	I0120 16:42:41.364339 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.366928 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.367308 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.367345 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.367538 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.367729 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.367894 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.367999 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.368153 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:41.368324 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:41.368333 2195552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:42:41.479683 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737391361.443756218
	
	I0120 16:42:41.479715 2195552 fix.go:216] guest clock: 1737391361.443756218
	I0120 16:42:41.479725 2195552 fix.go:229] Guest: 2025-01-20 16:42:41.443756218 +0000 UTC Remote: 2025-01-20 16:42:41.364324183 +0000 UTC m=+27.417363622 (delta=79.432035ms)
	I0120 16:42:41.479753 2195552 fix.go:200] guest clock delta is within tolerance: 79.432035ms
	I0120 16:42:41.479760 2195552 start.go:83] releasing machines lock for "flannel-708138", held for 27.404795771s
	I0120 16:42:41.479795 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.480084 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:41.483114 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.483496 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.483519 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.483702 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484306 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484533 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484636 2195552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:42:41.484681 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.484751 2195552 ssh_runner.go:195] Run: cat /version.json
	I0120 16:42:41.484776 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.487833 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.487927 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488372 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.488399 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488422 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.488436 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488512 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.488602 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.488694 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.488757 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.488853 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.488899 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.489003 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.489094 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.599954 2195552 ssh_runner.go:195] Run: systemctl --version
	I0120 16:42:41.607089 2195552 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:42:41.776515 2195552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:42:41.783949 2195552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:42:41.784065 2195552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:42:41.801321 2195552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:42:41.801352 2195552 start.go:495] detecting cgroup driver to use...
	I0120 16:42:41.801424 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:42:41.819201 2195552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:42:41.834731 2195552 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:42:41.834824 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:42:41.850093 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:42:41.865030 2195552 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:42:41.992116 2195552 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:42:42.163387 2195552 docker.go:233] disabling docker service ...
	I0120 16:42:42.163482 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:42:42.179064 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:42:42.194832 2195552 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:42:42.325738 2195552 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:42:42.463211 2195552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:42:42.478104 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:42:42.498097 2195552 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:42:42.498191 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.510081 2195552 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:42:42.510166 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.523170 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.535401 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.550805 2195552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:42:42.563405 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.575131 2195552 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.594402 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.606285 2195552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:42:42.616785 2195552 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:42:42.616863 2195552 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:42:42.631836 2195552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:42:42.643068 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:42:42.774308 2195552 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:42:42.883190 2195552 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:42:42.883286 2195552 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:42:42.889890 2195552 start.go:563] Will wait 60s for crictl version
	I0120 16:42:42.889963 2195552 ssh_runner.go:195] Run: which crictl
	I0120 16:42:42.895340 2195552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:42:42.953318 2195552 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:42:42.953426 2195552 ssh_runner.go:195] Run: crio --version
	I0120 16:42:42.988671 2195552 ssh_runner.go:195] Run: crio --version
	I0120 16:42:43.023504 2195552 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:42:43.024796 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:43.030238 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:43.030849 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:43.030886 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:43.031145 2195552 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:42:43.036477 2195552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:42:43.051619 2195552 kubeadm.go:883] updating cluster {Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:42:43.051797 2195552 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:42:43.051875 2195552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:42:43.095932 2195552 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:42:43.096025 2195552 ssh_runner.go:195] Run: which lz4
	I0120 16:42:43.101037 2195552 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:42:43.106099 2195552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:42:43.106139 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:42:42.022498 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:42.022333 2197289 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/bridge-708138.rawdisk...
	I0120 16:42:42.022537 2197206 main.go:141] libmachine: (bridge-708138) DBG | Writing magic tar header
	I0120 16:42:42.022550 2197206 main.go:141] libmachine: (bridge-708138) DBG | Writing SSH key tar header
	I0120 16:42:42.022558 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:42.022472 2197289 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 ...
	I0120 16:42:42.022576 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138
	I0120 16:42:42.022676 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 (perms=drwx------)
	I0120 16:42:42.022704 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:42:42.022716 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:42:42.022728 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:42.022745 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:42:42.022762 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:42:42.022771 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins
	I0120 16:42:42.022780 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home
	I0120 16:42:42.022821 2197206 main.go:141] libmachine: (bridge-708138) DBG | skipping /home - not owner
	I0120 16:42:42.022845 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:42:42.022858 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:42:42.022869 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:42:42.022883 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:42:42.022898 2197206 main.go:141] libmachine: (bridge-708138) creating domain...
	I0120 16:42:42.024254 2197206 main.go:141] libmachine: (bridge-708138) define libvirt domain using xml: 
	I0120 16:42:42.024299 2197206 main.go:141] libmachine: (bridge-708138) <domain type='kvm'>
	I0120 16:42:42.024309 2197206 main.go:141] libmachine: (bridge-708138)   <name>bridge-708138</name>
	I0120 16:42:42.024317 2197206 main.go:141] libmachine: (bridge-708138)   <memory unit='MiB'>3072</memory>
	I0120 16:42:42.024329 2197206 main.go:141] libmachine: (bridge-708138)   <vcpu>2</vcpu>
	I0120 16:42:42.024341 2197206 main.go:141] libmachine: (bridge-708138)   <features>
	I0120 16:42:42.024352 2197206 main.go:141] libmachine: (bridge-708138)     <acpi/>
	I0120 16:42:42.024360 2197206 main.go:141] libmachine: (bridge-708138)     <apic/>
	I0120 16:42:42.024370 2197206 main.go:141] libmachine: (bridge-708138)     <pae/>
	I0120 16:42:42.024375 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024382 2197206 main.go:141] libmachine: (bridge-708138)   </features>
	I0120 16:42:42.024395 2197206 main.go:141] libmachine: (bridge-708138)   <cpu mode='host-passthrough'>
	I0120 16:42:42.024433 2197206 main.go:141] libmachine: (bridge-708138)   
	I0120 16:42:42.024460 2197206 main.go:141] libmachine: (bridge-708138)   </cpu>
	I0120 16:42:42.024482 2197206 main.go:141] libmachine: (bridge-708138)   <os>
	I0120 16:42:42.024498 2197206 main.go:141] libmachine: (bridge-708138)     <type>hvm</type>
	I0120 16:42:42.024508 2197206 main.go:141] libmachine: (bridge-708138)     <boot dev='cdrom'/>
	I0120 16:42:42.024514 2197206 main.go:141] libmachine: (bridge-708138)     <boot dev='hd'/>
	I0120 16:42:42.024522 2197206 main.go:141] libmachine: (bridge-708138)     <bootmenu enable='no'/>
	I0120 16:42:42.024526 2197206 main.go:141] libmachine: (bridge-708138)   </os>
	I0120 16:42:42.024533 2197206 main.go:141] libmachine: (bridge-708138)   <devices>
	I0120 16:42:42.024544 2197206 main.go:141] libmachine: (bridge-708138)     <disk type='file' device='cdrom'>
	I0120 16:42:42.024558 2197206 main.go:141] libmachine: (bridge-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/boot2docker.iso'/>
	I0120 16:42:42.024574 2197206 main.go:141] libmachine: (bridge-708138)       <target dev='hdc' bus='scsi'/>
	I0120 16:42:42.024583 2197206 main.go:141] libmachine: (bridge-708138)       <readonly/>
	I0120 16:42:42.024604 2197206 main.go:141] libmachine: (bridge-708138)     </disk>
	I0120 16:42:42.024617 2197206 main.go:141] libmachine: (bridge-708138)     <disk type='file' device='disk'>
	I0120 16:42:42.024629 2197206 main.go:141] libmachine: (bridge-708138)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:42:42.024646 2197206 main.go:141] libmachine: (bridge-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/bridge-708138.rawdisk'/>
	I0120 16:42:42.024661 2197206 main.go:141] libmachine: (bridge-708138)       <target dev='hda' bus='virtio'/>
	I0120 16:42:42.024672 2197206 main.go:141] libmachine: (bridge-708138)     </disk>
	I0120 16:42:42.024682 2197206 main.go:141] libmachine: (bridge-708138)     <interface type='network'>
	I0120 16:42:42.024691 2197206 main.go:141] libmachine: (bridge-708138)       <source network='mk-bridge-708138'/>
	I0120 16:42:42.024701 2197206 main.go:141] libmachine: (bridge-708138)       <model type='virtio'/>
	I0120 16:42:42.024709 2197206 main.go:141] libmachine: (bridge-708138)     </interface>
	I0120 16:42:42.024723 2197206 main.go:141] libmachine: (bridge-708138)     <interface type='network'>
	I0120 16:42:42.024747 2197206 main.go:141] libmachine: (bridge-708138)       <source network='default'/>
	I0120 16:42:42.024765 2197206 main.go:141] libmachine: (bridge-708138)       <model type='virtio'/>
	I0120 16:42:42.024776 2197206 main.go:141] libmachine: (bridge-708138)     </interface>
	I0120 16:42:42.024786 2197206 main.go:141] libmachine: (bridge-708138)     <serial type='pty'>
	I0120 16:42:42.024791 2197206 main.go:141] libmachine: (bridge-708138)       <target port='0'/>
	I0120 16:42:42.024796 2197206 main.go:141] libmachine: (bridge-708138)     </serial>
	I0120 16:42:42.024802 2197206 main.go:141] libmachine: (bridge-708138)     <console type='pty'>
	I0120 16:42:42.024807 2197206 main.go:141] libmachine: (bridge-708138)       <target type='serial' port='0'/>
	I0120 16:42:42.024814 2197206 main.go:141] libmachine: (bridge-708138)     </console>
	I0120 16:42:42.024823 2197206 main.go:141] libmachine: (bridge-708138)     <rng model='virtio'>
	I0120 16:42:42.024843 2197206 main.go:141] libmachine: (bridge-708138)       <backend model='random'>/dev/random</backend>
	I0120 16:42:42.024857 2197206 main.go:141] libmachine: (bridge-708138)     </rng>
	I0120 16:42:42.024871 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024886 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024898 2197206 main.go:141] libmachine: (bridge-708138)   </devices>
	I0120 16:42:42.024905 2197206 main.go:141] libmachine: (bridge-708138) </domain>
	I0120 16:42:42.024917 2197206 main.go:141] libmachine: (bridge-708138) 
	I0120 16:42:42.029557 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:92:a4:fd in network default
	I0120 16:42:42.030218 2197206 main.go:141] libmachine: (bridge-708138) starting domain...
	I0120 16:42:42.030248 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:42.030257 2197206 main.go:141] libmachine: (bridge-708138) ensuring networks are active...
	I0120 16:42:42.031044 2197206 main.go:141] libmachine: (bridge-708138) Ensuring network default is active
	I0120 16:42:42.031601 2197206 main.go:141] libmachine: (bridge-708138) Ensuring network mk-bridge-708138 is active
	I0120 16:42:42.032382 2197206 main.go:141] libmachine: (bridge-708138) getting domain XML...
	I0120 16:42:42.033582 2197206 main.go:141] libmachine: (bridge-708138) creating domain...
	I0120 16:42:43.399268 2197206 main.go:141] libmachine: (bridge-708138) waiting for IP...
	I0120 16:42:43.400313 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.400849 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.400943 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.400854 2197289 retry.go:31] will retry after 255.464218ms: waiting for domain to come up
	I0120 16:42:43.658464 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.659186 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.659219 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.659154 2197289 retry.go:31] will retry after 266.392686ms: waiting for domain to come up
	I0120 16:42:43.928079 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.928991 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.929026 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.928961 2197289 retry.go:31] will retry after 451.40279ms: waiting for domain to come up
	I0120 16:42:44.382040 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:44.382828 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:44.382874 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:44.382787 2197289 retry.go:31] will retry after 443.359812ms: waiting for domain to come up
	I0120 16:42:44.827744 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:44.828300 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:44.828402 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:44.828290 2197289 retry.go:31] will retry after 735.012761ms: waiting for domain to come up
	I0120 16:42:45.565132 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:45.565770 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:45.565798 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:45.565735 2197289 retry.go:31] will retry after 744.342493ms: waiting for domain to come up
	I0120 16:42:46.311596 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:46.312274 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:46.312307 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:46.312254 2197289 retry.go:31] will retry after 1.044734911s: waiting for domain to come up
	I0120 16:42:44.760474 2195552 crio.go:462] duration metric: took 1.659486395s to copy over tarball
	I0120 16:42:44.760562 2195552 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:42:47.285354 2195552 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.524736784s)
	I0120 16:42:47.285446 2195552 crio.go:469] duration metric: took 2.524929922s to extract the tarball
	I0120 16:42:47.285471 2195552 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:42:47.324858 2195552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:42:47.372415 2195552 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:42:47.372446 2195552 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:42:47.372457 2195552 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.32.0 crio true true} ...
	I0120 16:42:47.372643 2195552 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-708138 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0120 16:42:47.372722 2195552 ssh_runner.go:195] Run: crio config
	I0120 16:42:47.422488 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:42:47.422519 2195552 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:42:47.422554 2195552 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-708138 NodeName:flannel-708138 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:42:47.422786 2195552 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-708138"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:42:47.422890 2195552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:42:47.433846 2195552 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:42:47.433938 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:42:47.444578 2195552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0120 16:42:47.461856 2195552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:42:47.478765 2195552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0120 16:42:47.495925 2195552 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0120 16:42:47.500231 2195552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:42:47.513503 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:42:47.646909 2195552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:42:47.666731 2195552 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138 for IP: 192.168.39.206
	I0120 16:42:47.666760 2195552 certs.go:194] generating shared ca certs ...
	I0120 16:42:47.666784 2195552 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.666988 2195552 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:42:47.667058 2195552 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:42:47.667071 2195552 certs.go:256] generating profile certs ...
	I0120 16:42:47.667161 2195552 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key
	I0120 16:42:47.667181 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt with IP's: []
	I0120 16:42:47.957732 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt ...
	I0120 16:42:47.957764 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt: {Name:mk2f64b37e464c896144cdc44cfc1fc4f548c045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.957936 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key ...
	I0120 16:42:47.957947 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key: {Name:mk1b16a48ea06faf15a739043d6a562a12842ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.958021 2195552 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76
	I0120 16:42:47.958037 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I0120 16:42:48.237739 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 ...
	I0120 16:42:48.237772 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76: {Name:mk2d82f1b438734a66d4bca5d26768f17a50dbb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.237934 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76 ...
	I0120 16:42:48.237945 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76: {Name:mk5552939933befe1ef0d3a7fff6d21fdf398d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.238016 2195552 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt
	I0120 16:42:48.238119 2195552 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key
	I0120 16:42:48.238183 2195552 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key
	I0120 16:42:48.238205 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt with IP's: []
	I0120 16:42:48.328536 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt ...
	I0120 16:42:48.328597 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt: {Name:mk71903f0dc1f4b5602bf3f87a72991a3294fe05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.328771 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key ...
	I0120 16:42:48.328786 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key: {Name:mkb6cb1df1b5d7b66259c1ec746be1ba174817a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.328986 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:42:48.329026 2195552 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:42:48.329038 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:42:48.329061 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:42:48.329085 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:42:48.329113 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:42:48.329155 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:42:48.329806 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:42:48.377022 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:42:48.423232 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:42:48.452106 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:42:48.484435 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:42:48.514707 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:42:48.541159 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:42:48.642490 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:42:48.668101 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:42:48.696379 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:42:48.722994 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:42:48.748145 2195552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:42:48.766358 2195552 ssh_runner.go:195] Run: openssl version
	I0120 16:42:48.773160 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:42:48.785416 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.791084 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.791158 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.797932 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:42:48.811525 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:42:48.826046 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.832200 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.832280 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.838879 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:42:48.851808 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:42:48.865253 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.870647 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.870724 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.877010 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:42:48.889902 2195552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:42:48.894559 2195552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:42:48.894640 2195552 kubeadm.go:392] StartCluster: {Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:42:48.894779 2195552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:42:48.894890 2195552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:42:48.940887 2195552 cri.go:89] found id: ""
	I0120 16:42:48.940984 2195552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:42:48.952531 2195552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:42:48.963786 2195552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:42:48.974250 2195552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:42:48.974278 2195552 kubeadm.go:157] found existing configuration files:
	
	I0120 16:42:48.974338 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:42:48.984449 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:42:48.984527 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:42:48.995330 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:42:49.006034 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:42:49.006104 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:42:49.017110 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:42:49.027295 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:42:49.027368 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:42:49.040812 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:42:49.051290 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:42:49.051377 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:42:49.066485 2195552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:42:49.134741 2195552 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:42:49.134946 2195552 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:42:49.249160 2195552 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:42:49.249323 2195552 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:42:49.249481 2195552 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:42:49.263796 2195552 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:42:47.358916 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:47.359566 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:47.359596 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:47.359554 2197289 retry.go:31] will retry after 1.461778861s: waiting for domain to come up
	I0120 16:42:48.823504 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:48.824115 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:48.824147 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:48.824084 2197289 retry.go:31] will retry after 1.249679155s: waiting for domain to come up
	I0120 16:42:50.075499 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:50.076082 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:50.076116 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:50.076030 2197289 retry.go:31] will retry after 2.28026185s: waiting for domain to come up
	I0120 16:42:49.298061 2195552 out.go:235]   - Generating certificates and keys ...
	I0120 16:42:49.298271 2195552 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:42:49.298360 2195552 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:42:49.326405 2195552 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:42:49.603739 2195552 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:42:50.017706 2195552 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:42:50.212861 2195552 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:42:50.332005 2195552 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:42:50.332365 2195552 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-708138 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0120 16:42:50.576915 2195552 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:42:50.577225 2195552 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-708138 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0120 16:42:50.922540 2195552 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:42:51.148072 2195552 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:42:51.262833 2195552 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:42:51.262930 2195552 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:42:51.404906 2195552 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:42:51.648067 2195552 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:42:51.759756 2195552 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:42:51.962741 2195552 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:42:52.453700 2195552 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:42:52.456041 2195552 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:42:52.459366 2195552 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:42:52.461278 2195552 out.go:235]   - Booting up control plane ...
	I0120 16:42:52.461391 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:42:52.461507 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:42:52.461588 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:42:52.484769 2195552 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:42:52.493367 2195552 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:42:52.493452 2195552 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:42:52.663075 2195552 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:42:52.664096 2195552 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:42:52.357734 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:52.358411 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:52.358493 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:52.358391 2197289 retry.go:31] will retry after 2.232137635s: waiting for domain to come up
	I0120 16:42:54.592598 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:54.593256 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:54.593288 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:54.593159 2197289 retry.go:31] will retry after 3.499879042s: waiting for domain to come up
	I0120 16:42:54.164599 2195552 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501261507s
	I0120 16:42:54.164721 2195552 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:42:59.162803 2195552 kubeadm.go:310] [api-check] The API server is healthy after 5.001059076s
	I0120 16:42:59.182087 2195552 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:42:59.202928 2195552 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:42:59.251598 2195552 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:42:59.251870 2195552 kubeadm.go:310] [mark-control-plane] Marking the node flannel-708138 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:42:59.267327 2195552 kubeadm.go:310] [bootstrap-token] Using token: 0uevl5.w9rl7hild7q3qmvj
	I0120 16:42:59.268924 2195552 out.go:235]   - Configuring RBAC rules ...
	I0120 16:42:59.269076 2195552 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:42:59.276545 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:42:59.290974 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:42:59.296882 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:42:59.304061 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:42:59.311324 2195552 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:42:59.571703 2195552 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:42:59.999391 2195552 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:43:00.569884 2195552 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:43:00.572667 2195552 kubeadm.go:310] 
	I0120 16:43:00.572758 2195552 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:43:00.572768 2195552 kubeadm.go:310] 
	I0120 16:43:00.572931 2195552 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:43:00.572966 2195552 kubeadm.go:310] 
	I0120 16:43:00.573016 2195552 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:43:00.573090 2195552 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:43:00.573154 2195552 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:43:00.573163 2195552 kubeadm.go:310] 
	I0120 16:43:00.573251 2195552 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:43:00.573265 2195552 kubeadm.go:310] 
	I0120 16:43:00.573345 2195552 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:43:00.573378 2195552 kubeadm.go:310] 
	I0120 16:43:00.573475 2195552 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:43:00.573604 2195552 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:43:00.573697 2195552 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:43:00.573707 2195552 kubeadm.go:310] 
	I0120 16:43:00.573823 2195552 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:43:00.573923 2195552 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:43:00.573930 2195552 kubeadm.go:310] 
	I0120 16:43:00.574048 2195552 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0uevl5.w9rl7hild7q3qmvj \
	I0120 16:43:00.574201 2195552 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:43:00.574235 2195552 kubeadm.go:310] 	--control-plane 
	I0120 16:43:00.574258 2195552 kubeadm.go:310] 
	I0120 16:43:00.574400 2195552 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:43:00.574432 2195552 kubeadm.go:310] 
	I0120 16:43:00.574590 2195552 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0uevl5.w9rl7hild7q3qmvj \
	I0120 16:43:00.574795 2195552 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:43:00.575007 2195552 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:43:00.575049 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:43:00.576721 2195552 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0120 16:42:58.094988 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:58.095844 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:58.095874 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:58.095719 2197289 retry.go:31] will retry after 4.384762232s: waiting for domain to come up
	I0120 16:43:00.577996 2195552 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0120 16:43:00.584504 2195552 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 16:43:00.584526 2195552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0120 16:43:00.610147 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 16:43:01.108354 2195552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:43:01.108472 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:01.108474 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-708138 minikube.k8s.io/updated_at=2025_01_20T16_43_01_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=flannel-708138 minikube.k8s.io/primary=true
	I0120 16:43:01.153107 2195552 ops.go:34] apiserver oom_adj: -16
	I0120 16:43:01.323188 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:01.823589 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:02.324096 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:02.823844 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:03.323872 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:03.823872 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:04.323604 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:04.428740 2195552 kubeadm.go:1113] duration metric: took 3.320348756s to wait for elevateKubeSystemPrivileges
	I0120 16:43:04.428788 2195552 kubeadm.go:394] duration metric: took 15.534153444s to StartCluster
	I0120 16:43:04.428816 2195552 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:04.428921 2195552 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:43:04.430989 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:04.431307 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 16:43:04.431303 2195552 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:43:04.431336 2195552 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:43:04.431519 2195552 addons.go:69] Setting storage-provisioner=true in profile "flannel-708138"
	I0120 16:43:04.431529 2195552 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:04.431538 2195552 addons.go:238] Setting addon storage-provisioner=true in "flannel-708138"
	I0120 16:43:04.431579 2195552 host.go:66] Checking if "flannel-708138" exists ...
	I0120 16:43:04.431586 2195552 addons.go:69] Setting default-storageclass=true in profile "flannel-708138"
	I0120 16:43:04.431621 2195552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-708138"
	I0120 16:43:04.432070 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.432112 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.432118 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.432151 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.435123 2195552 out.go:177] * Verifying Kubernetes components...
	I0120 16:43:04.436595 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:04.449431 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0120 16:43:04.449469 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0120 16:43:04.450031 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.450065 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.450628 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.450657 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.450772 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.450798 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.451074 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.451199 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.451435 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.451674 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.451723 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.455136 2195552 addons.go:238] Setting addon default-storageclass=true in "flannel-708138"
	I0120 16:43:04.455176 2195552 host.go:66] Checking if "flannel-708138" exists ...
	I0120 16:43:04.455442 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.455480 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.468668 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0120 16:43:04.469232 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.469794 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.469810 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.470234 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.470456 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.471939 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I0120 16:43:04.472364 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.472464 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:43:04.472904 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.472933 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.473322 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.473822 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.473860 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.474444 2195552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:43:04.475956 2195552 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:04.475976 2195552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:43:04.475998 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:43:04.479414 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.479895 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:43:04.479928 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.480056 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:43:04.480246 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:43:04.480426 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:43:04.480560 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:43:04.491228 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0120 16:43:04.491682 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.492333 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.492364 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.492740 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.492924 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.494696 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:43:04.494958 2195552 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:04.494975 2195552 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:43:04.494997 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:43:04.497642 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.498099 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:43:04.498131 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.498258 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:43:04.498486 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:43:04.498649 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:43:04.498811 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:43:04.741102 2195552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:04.741114 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 16:43:04.889912 2195552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:04.966678 2195552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:05.319499 2195552 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0120 16:43:05.321208 2195552 node_ready.go:35] waiting up to 15m0s for node "flannel-708138" to be "Ready" ...
	I0120 16:43:05.578109 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578136 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.578257 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578282 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.578512 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.578539 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.578550 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578558 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.580280 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580297 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580296 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580313 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.580323 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.580333 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.580340 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.580334 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580582 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580586 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580600 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.591009 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.591045 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.591353 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.591368 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.591377 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.593936 2195552 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 16:43:02.482109 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:02.482647 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:43:02.482679 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:43:02.482582 2197289 retry.go:31] will retry after 5.49113903s: waiting for domain to come up
	I0120 16:43:05.595175 2195552 addons.go:514] duration metric: took 1.163842267s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 16:43:05.824160 2195552 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-708138" context rescaled to 1 replicas
	I0120 16:43:07.325793 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:07.975570 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:07.976154 2197206 main.go:141] libmachine: (bridge-708138) found domain IP: 192.168.72.88
	I0120 16:43:07.976182 2197206 main.go:141] libmachine: (bridge-708138) reserving static IP address...
	I0120 16:43:07.976192 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has current primary IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:07.976560 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find host DHCP lease matching {name: "bridge-708138", mac: "52:54:00:d9:89:1c", ip: "192.168.72.88"} in network mk-bridge-708138
	I0120 16:43:08.062745 2197206 main.go:141] libmachine: (bridge-708138) reserved static IP address 192.168.72.88 for domain bridge-708138
	I0120 16:43:08.062784 2197206 main.go:141] libmachine: (bridge-708138) DBG | Getting to WaitForSSH function...
	I0120 16:43:08.062792 2197206 main.go:141] libmachine: (bridge-708138) waiting for SSH...
	I0120 16:43:08.065921 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.066430 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.066483 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.066582 2197206 main.go:141] libmachine: (bridge-708138) DBG | Using SSH client type: external
	I0120 16:43:08.066651 2197206 main.go:141] libmachine: (bridge-708138) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa (-rw-------)
	I0120 16:43:08.066681 2197206 main.go:141] libmachine: (bridge-708138) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:43:08.066697 2197206 main.go:141] libmachine: (bridge-708138) DBG | About to run SSH command:
	I0120 16:43:08.066706 2197206 main.go:141] libmachine: (bridge-708138) DBG | exit 0
	I0120 16:43:08.195445 2197206 main.go:141] libmachine: (bridge-708138) DBG | SSH cmd err, output: <nil>: 
	I0120 16:43:08.195759 2197206 main.go:141] libmachine: (bridge-708138) KVM machine creation complete
	I0120 16:43:08.196070 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:43:08.196739 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:08.197017 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:08.197188 2197206 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:43:08.197231 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:08.198995 2197206 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:43:08.199011 2197206 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:43:08.199017 2197206 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:43:08.199022 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.201755 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.202123 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.202152 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.202261 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.202473 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.202647 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.202790 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.202975 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.203249 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.203266 2197206 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:43:08.310341 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:43:08.310368 2197206 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:43:08.310376 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.313249 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.313593 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.313617 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.313753 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.313976 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.314162 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.314330 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.314548 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.314788 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.314803 2197206 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:43:08.424018 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:43:08.424146 2197206 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:43:08.424160 2197206 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:43:08.424174 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.424466 2197206 buildroot.go:166] provisioning hostname "bridge-708138"
	I0120 16:43:08.424517 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.424725 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.427305 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.427686 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.427715 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.427863 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.428207 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.428411 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.428534 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.428719 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.428965 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.428985 2197206 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-708138 && echo "bridge-708138" | sudo tee /etc/hostname
	I0120 16:43:08.551195 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-708138
	
	I0120 16:43:08.551238 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.554014 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.554390 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.554423 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.554574 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.554806 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.554968 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.555124 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.555257 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.555452 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.555467 2197206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-708138' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-708138/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-708138' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:43:08.673244 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:43:08.673286 2197206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:43:08.673324 2197206 buildroot.go:174] setting up certificates
	I0120 16:43:08.673340 2197206 provision.go:84] configureAuth start
	I0120 16:43:08.673357 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.673699 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:08.676632 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.676968 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.677000 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.677175 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.679290 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.679603 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.679632 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.679786 2197206 provision.go:143] copyHostCerts
	I0120 16:43:08.679847 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:43:08.679859 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:43:08.679915 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:43:08.680004 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:43:08.680019 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:43:08.680038 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:43:08.680087 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:43:08.680094 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:43:08.680113 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:43:08.680159 2197206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.bridge-708138 san=[127.0.0.1 192.168.72.88 bridge-708138 localhost minikube]
	I0120 16:43:08.795436 2197206 provision.go:177] copyRemoteCerts
	I0120 16:43:08.795532 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:43:08.795567 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.798390 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.798751 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.798784 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.798951 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.799157 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.799316 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.799470 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:08.890925 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:43:08.918903 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 16:43:08.946784 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:43:08.972830 2197206 provision.go:87] duration metric: took 299.472419ms to configureAuth
	I0120 16:43:08.972860 2197206 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:43:08.973105 2197206 config.go:182] Loaded profile config "bridge-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:08.973209 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.976107 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.976516 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.976547 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.976758 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.977001 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.977195 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.977372 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.977552 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.977793 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.977818 2197206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:43:09.218079 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:43:09.218113 2197206 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:43:09.218121 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetURL
	I0120 16:43:09.219440 2197206 main.go:141] libmachine: (bridge-708138) DBG | using libvirt version 6000000
	I0120 16:43:09.221519 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.221903 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.221936 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.222152 2197206 main.go:141] libmachine: Docker is up and running!
	I0120 16:43:09.222170 2197206 main.go:141] libmachine: Reticulating splines...
	I0120 16:43:09.222180 2197206 client.go:171] duration metric: took 27.720355771s to LocalClient.Create
	I0120 16:43:09.222209 2197206 start.go:167] duration metric: took 27.720430833s to libmachine.API.Create "bridge-708138"
	I0120 16:43:09.222223 2197206 start.go:293] postStartSetup for "bridge-708138" (driver="kvm2")
	I0120 16:43:09.222236 2197206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:43:09.222269 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.222508 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:43:09.222546 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.224660 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.224997 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.225028 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.225135 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.225326 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.225514 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.225714 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.311781 2197206 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:43:09.316438 2197206 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:43:09.316477 2197206 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:43:09.316558 2197206 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:43:09.316649 2197206 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:43:09.316749 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:43:09.329422 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:43:09.358995 2197206 start.go:296] duration metric: took 136.756187ms for postStartSetup
	I0120 16:43:09.359076 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:43:09.359720 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:09.362855 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.363228 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.363298 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.363532 2197206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json ...
	I0120 16:43:09.363729 2197206 start.go:128] duration metric: took 27.883644045s to createHost
	I0120 16:43:09.363752 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.367222 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.367703 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.367728 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.367889 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.368112 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.368248 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.368376 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.368536 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:09.368750 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:09.368769 2197206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:43:09.476152 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737391389.460433936
	
	I0120 16:43:09.476186 2197206 fix.go:216] guest clock: 1737391389.460433936
	I0120 16:43:09.476208 2197206 fix.go:229] Guest: 2025-01-20 16:43:09.460433936 +0000 UTC Remote: 2025-01-20 16:43:09.363740668 +0000 UTC m=+37.396826539 (delta=96.693268ms)
	I0120 16:43:09.476239 2197206 fix.go:200] guest clock delta is within tolerance: 96.693268ms
	I0120 16:43:09.476250 2197206 start.go:83] releasing machines lock for "bridge-708138", held for 27.996351856s
	I0120 16:43:09.476280 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.476552 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:09.479629 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.480100 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.480130 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.480293 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.480785 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.480979 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.481115 2197206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:43:09.481163 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.481228 2197206 ssh_runner.go:195] Run: cat /version.json
	I0120 16:43:09.481255 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.484029 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484438 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.484465 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484487 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484809 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.484960 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.485013 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.485036 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.485249 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.485266 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.485476 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.485524 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.485634 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.485801 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.572916 2197206 ssh_runner.go:195] Run: systemctl --version
	I0120 16:43:09.609198 2197206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:43:09.772783 2197206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:43:09.779241 2197206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:43:09.779347 2197206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:43:09.796029 2197206 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:43:09.796066 2197206 start.go:495] detecting cgroup driver to use...
	I0120 16:43:09.796162 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:43:09.813742 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:43:09.828707 2197206 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:43:09.828775 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:43:09.843309 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:43:09.858188 2197206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:43:09.984031 2197206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:43:10.146631 2197206 docker.go:233] disabling docker service ...
	I0120 16:43:10.146719 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:43:10.162952 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:43:10.176639 2197206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:43:10.313460 2197206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:43:10.449221 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:43:10.464620 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:43:10.484192 2197206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:43:10.484261 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.496517 2197206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:43:10.496623 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.508222 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.519634 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.531216 2197206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:43:10.543258 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.557639 2197206 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.580753 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.592908 2197206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:43:10.604469 2197206 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:43:10.604557 2197206 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:43:10.619774 2197206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:43:10.630917 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:10.771445 2197206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:43:10.858491 2197206 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:43:10.858594 2197206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:43:10.863619 2197206 start.go:563] Will wait 60s for crictl version
	I0120 16:43:10.863674 2197206 ssh_runner.go:195] Run: which crictl
	I0120 16:43:10.867761 2197206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:43:10.910094 2197206 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:43:10.910202 2197206 ssh_runner.go:195] Run: crio --version
	I0120 16:43:10.946319 2197206 ssh_runner.go:195] Run: crio --version
	I0120 16:43:10.984785 2197206 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:43:10.986112 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:10.989054 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:10.989473 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:10.989499 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:10.989835 2197206 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 16:43:10.994705 2197206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:43:11.009975 2197206 kubeadm.go:883] updating cluster {Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:43:11.010149 2197206 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:43:11.010226 2197206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:43:11.045673 2197206 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:43:11.045764 2197206 ssh_runner.go:195] Run: which lz4
	I0120 16:43:11.050364 2197206 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:43:11.054940 2197206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:43:11.054978 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:43:09.824714 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:11.826450 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:12.645258 2197206 crio.go:462] duration metric: took 1.594939639s to copy over tarball
	I0120 16:43:12.645365 2197206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:43:15.071062 2197206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425659919s)
	I0120 16:43:15.071103 2197206 crio.go:469] duration metric: took 2.425799615s to extract the tarball
	I0120 16:43:15.071114 2197206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:43:15.111615 2197206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:43:15.156900 2197206 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:43:15.156926 2197206 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:43:15.156936 2197206 kubeadm.go:934] updating node { 192.168.72.88 8443 v1.32.0 crio true true} ...
	I0120 16:43:15.157067 2197206 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-708138 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0120 16:43:15.157162 2197206 ssh_runner.go:195] Run: crio config
	I0120 16:43:15.208647 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:43:15.208676 2197206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:43:15.208699 2197206 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.88 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-708138 NodeName:bridge-708138 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:43:15.208830 2197206 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-708138"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.88"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.88"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:43:15.208898 2197206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:43:15.220035 2197206 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:43:15.220130 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:43:15.230274 2197206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0120 16:43:15.250389 2197206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:43:15.268846 2197206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0120 16:43:15.288060 2197206 ssh_runner.go:195] Run: grep 192.168.72.88	control-plane.minikube.internal$ /etc/hosts
	I0120 16:43:15.293094 2197206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:43:15.307503 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:15.448214 2197206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:15.471118 2197206 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138 for IP: 192.168.72.88
	I0120 16:43:15.471147 2197206 certs.go:194] generating shared ca certs ...
	I0120 16:43:15.471165 2197206 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.471331 2197206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:43:15.471386 2197206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:43:15.471396 2197206 certs.go:256] generating profile certs ...
	I0120 16:43:15.471452 2197206 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key
	I0120 16:43:15.471479 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt with IP's: []
	I0120 16:43:15.891023 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt ...
	I0120 16:43:15.891061 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: {Name:mk81b32ec31af688b6d4652fb2789449b6bb041c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.891285 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key ...
	I0120 16:43:15.891309 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key: {Name:mk3bbf7430f7b04957959e169acea17d8973d267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.891454 2197206 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5
	I0120 16:43:15.891482 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.88]
	I0120 16:43:16.021148 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 ...
	I0120 16:43:16.021182 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5: {Name:mk56a312fc5ec12eb4e10626dc4fa18ded44019d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.021396 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5 ...
	I0120 16:43:16.021416 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5: {Name:mk71d4978edbd5634298d6328a82e57dfdcb21df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.021521 2197206 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt
	I0120 16:43:16.021621 2197206 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key
	I0120 16:43:16.021684 2197206 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key
	I0120 16:43:16.021701 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt with IP's: []
	I0120 16:43:16.200719 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt ...
	I0120 16:43:16.200752 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt: {Name:mk1b93fabdfdbe923ba4bd4bdcee8aa4ee4eb6eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.200944 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key ...
	I0120 16:43:16.200964 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key: {Name:mk47f0abf782077fe358b23835f1924f393006e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.201182 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:43:16.201225 2197206 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:43:16.201236 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:43:16.201260 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:43:16.201283 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:43:16.201303 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:43:16.201340 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:43:16.201918 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:43:16.237391 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:43:16.277743 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:43:16.306735 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:43:16.334792 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:43:16.363266 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:43:16.391982 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:43:16.419674 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:43:16.446802 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:43:16.474961 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:43:16.503997 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:43:16.530572 2197206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:43:16.548971 2197206 ssh_runner.go:195] Run: openssl version
	I0120 16:43:16.555413 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:43:16.567053 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.571897 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.571974 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.578136 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:43:16.590223 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:43:16.602984 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.607971 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.608083 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.614296 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:43:16.626015 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:43:16.639800 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.645006 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.645084 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.651449 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:43:16.663469 2197206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:43:16.668102 2197206 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:43:16.668167 2197206 kubeadm.go:392] StartCluster: {Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:43:16.668285 2197206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:43:16.668340 2197206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:43:16.706702 2197206 cri.go:89] found id: ""
	I0120 16:43:16.706804 2197206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:43:16.718586 2197206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:43:16.729343 2197206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:43:16.740887 2197206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:43:16.740911 2197206 kubeadm.go:157] found existing configuration files:
	
	I0120 16:43:16.740975 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:43:16.753083 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:43:16.753151 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:43:16.764580 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:43:16.776660 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:43:16.776739 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:43:16.787809 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:43:16.800110 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:43:16.800203 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:43:16.811124 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:43:16.822087 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:43:16.822160 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:43:16.834957 2197206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:43:16.902421 2197206 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:43:16.902553 2197206 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:43:17.042455 2197206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:43:17.042629 2197206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:43:17.042798 2197206 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:43:17.053323 2197206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:43:14.324786 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:16.325269 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:18.393718 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:17.321797 2197206 out.go:235]   - Generating certificates and keys ...
	I0120 16:43:17.321934 2197206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:43:17.322011 2197206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:43:17.402336 2197206 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:43:17.536347 2197206 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:43:17.688442 2197206 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:43:17.858918 2197206 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:43:18.183422 2197206 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:43:18.183672 2197206 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-708138 localhost] and IPs [192.168.72.88 127.0.0.1 ::1]
	I0120 16:43:18.264748 2197206 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:43:18.264953 2197206 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-708138 localhost] and IPs [192.168.72.88 127.0.0.1 ::1]
	I0120 16:43:18.426217 2197206 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:43:18.686494 2197206 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:43:18.828457 2197206 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:43:18.828691 2197206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:43:18.955301 2197206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:43:19.046031 2197206 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:43:19.231335 2197206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:43:19.447816 2197206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:43:19.619053 2197206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:43:19.619607 2197206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:43:19.622288 2197206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:43:19.624157 2197206 out.go:235]   - Booting up control plane ...
	I0120 16:43:19.624275 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:43:19.624380 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:43:19.624476 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:43:19.646471 2197206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:43:19.657842 2197206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:43:19.657931 2197206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:43:19.804616 2197206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:43:19.804743 2197206 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:43:20.315932 2197206 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.124273ms
	I0120 16:43:20.316084 2197206 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:43:20.825198 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:23.325444 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:25.818525 2197206 kubeadm.go:310] [api-check] The API server is healthy after 5.503297043s
	I0120 16:43:25.835132 2197206 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:43:25.869802 2197206 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:43:25.925988 2197206 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:43:25.926216 2197206 kubeadm.go:310] [mark-control-plane] Marking the node bridge-708138 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:43:25.952439 2197206 kubeadm.go:310] [bootstrap-token] Using token: xw20yr.9359ar4c28065art
	I0120 16:43:25.954040 2197206 out.go:235]   - Configuring RBAC rules ...
	I0120 16:43:25.954189 2197206 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:43:25.971234 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:43:25.984672 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:43:25.992321 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:43:25.998352 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:43:26.005011 2197206 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:43:26.224365 2197206 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:43:26.676446 2197206 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:43:27.225715 2197206 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:43:27.229867 2197206 kubeadm.go:310] 
	I0120 16:43:27.229970 2197206 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:43:27.229988 2197206 kubeadm.go:310] 
	I0120 16:43:27.230128 2197206 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:43:27.230149 2197206 kubeadm.go:310] 
	I0120 16:43:27.230187 2197206 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:43:27.230280 2197206 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:43:27.230366 2197206 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:43:27.230377 2197206 kubeadm.go:310] 
	I0120 16:43:27.230453 2197206 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:43:27.230469 2197206 kubeadm.go:310] 
	I0120 16:43:27.230530 2197206 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:43:27.230540 2197206 kubeadm.go:310] 
	I0120 16:43:27.230633 2197206 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:43:27.230741 2197206 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:43:27.230840 2197206 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:43:27.230850 2197206 kubeadm.go:310] 
	I0120 16:43:27.230947 2197206 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:43:27.231060 2197206 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:43:27.231069 2197206 kubeadm.go:310] 
	I0120 16:43:27.231168 2197206 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xw20yr.9359ar4c28065art \
	I0120 16:43:27.231293 2197206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:43:27.231325 2197206 kubeadm.go:310] 	--control-plane 
	I0120 16:43:27.231336 2197206 kubeadm.go:310] 
	I0120 16:43:27.231463 2197206 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:43:27.231479 2197206 kubeadm.go:310] 
	I0120 16:43:27.231554 2197206 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xw20yr.9359ar4c28065art \
	I0120 16:43:27.231702 2197206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:43:27.232406 2197206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:43:27.232502 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:43:27.235020 2197206 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 16:43:25.325819 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:27.325884 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:27.236381 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 16:43:27.251582 2197206 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 16:43:27.277986 2197206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:43:27.278066 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:27.278083 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-708138 minikube.k8s.io/updated_at=2025_01_20T16_43_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=bridge-708138 minikube.k8s.io/primary=true
	I0120 16:43:27.318132 2197206 ops.go:34] apiserver oom_adj: -16
	I0120 16:43:27.454138 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:27.955129 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:28.454750 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:28.954684 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:29.454513 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:29.955223 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:30.455022 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:30.954199 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:31.454428 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:31.606545 2197206 kubeadm.go:1113] duration metric: took 4.328571416s to wait for elevateKubeSystemPrivileges
	I0120 16:43:31.606592 2197206 kubeadm.go:394] duration metric: took 14.938431891s to StartCluster
	I0120 16:43:31.606633 2197206 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:31.606774 2197206 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:43:31.609525 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:31.609884 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 16:43:31.609885 2197206 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:43:31.609984 2197206 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:43:31.610121 2197206 addons.go:69] Setting storage-provisioner=true in profile "bridge-708138"
	I0120 16:43:31.610144 2197206 addons.go:238] Setting addon storage-provisioner=true in "bridge-708138"
	I0120 16:43:31.610141 2197206 addons.go:69] Setting default-storageclass=true in profile "bridge-708138"
	I0120 16:43:31.610154 2197206 config.go:182] Loaded profile config "bridge-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:31.610166 2197206 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-708138"
	I0120 16:43:31.610193 2197206 host.go:66] Checking if "bridge-708138" exists ...
	I0120 16:43:31.610720 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.610774 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.610788 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.610837 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.611842 2197206 out.go:177] * Verifying Kubernetes components...
	I0120 16:43:31.613454 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:31.628647 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45227
	I0120 16:43:31.628881 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0120 16:43:31.629232 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.629383 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.629930 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.629952 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.630016 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.630040 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.630423 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.630687 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.630689 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.631256 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.631304 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.634974 2197206 addons.go:238] Setting addon default-storageclass=true in "bridge-708138"
	I0120 16:43:31.635030 2197206 host.go:66] Checking if "bridge-708138" exists ...
	I0120 16:43:31.635335 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.635387 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.649021 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0120 16:43:31.649452 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.651254 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.651285 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.651867 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.652059 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.653726 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0120 16:43:31.654126 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.654296 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:31.654915 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.654928 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.655380 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.655949 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.656008 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.656646 2197206 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:43:31.658066 2197206 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:31.658082 2197206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:43:31.658099 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:31.661450 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.661729 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:31.661760 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.662030 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:31.662235 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:31.662397 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:31.662550 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:31.676457 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0120 16:43:31.677019 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.677756 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.677789 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.678148 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.678385 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.680320 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:31.680609 2197206 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:31.680630 2197206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:43:31.680655 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:31.683331 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.683716 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:31.683795 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.684017 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:31.684235 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:31.684397 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:31.684535 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:31.936634 2197206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:31.936728 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 16:43:31.976057 2197206 node_ready.go:35] waiting up to 15m0s for node "bridge-708138" to be "Ready" ...
	I0120 16:43:31.985329 2197206 node_ready.go:49] node "bridge-708138" has status "Ready":"True"
	I0120 16:43:31.985356 2197206 node_ready.go:38] duration metric: took 9.257739ms for node "bridge-708138" to be "Ready" ...
	I0120 16:43:31.985368 2197206 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:43:31.995641 2197206 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:32.055183 2197206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:32.153090 2197206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:32.568616 2197206 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0120 16:43:32.853746 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.853781 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.853900 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.853924 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854124 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.854175 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.854180 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.854222 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.854226 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.854268 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.854280 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854197 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.854356 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854138 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856214 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856226 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.856289 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.856306 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856355 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.856368 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.874144 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.874173 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.874543 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.874584 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.874595 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.876336 2197206 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 16:43:29.825256 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:31.826538 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:32.877697 2197206 addons.go:514] duration metric: took 1.267734381s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 16:43:33.076155 2197206 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-708138" context rescaled to 1 replicas
	I0120 16:43:33.998522 2197206 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-284cz" not found
	I0120 16:43:33.998557 2197206 pod_ready.go:82] duration metric: took 2.002870414s for pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace to be "Ready" ...
	E0120 16:43:33.998571 2197206 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-284cz" not found
	I0120 16:43:33.998581 2197206 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:36.006241 2197206 pod_ready.go:103] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"False"
	I0120 16:43:34.324997 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:36.326016 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:38.825101 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:38.504747 2197206 pod_ready.go:103] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"False"
	I0120 16:43:40.005785 2197206 pod_ready.go:93] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.005813 2197206 pod_ready.go:82] duration metric: took 6.007222936s for pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.005823 2197206 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.011217 2197206 pod_ready.go:93] pod "etcd-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.011239 2197206 pod_ready.go:82] duration metric: took 5.409716ms for pod "etcd-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.011248 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.016613 2197206 pod_ready.go:93] pod "kube-apiserver-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.016634 2197206 pod_ready.go:82] duration metric: took 5.379045ms for pod "kube-apiserver-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.016643 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.021777 2197206 pod_ready.go:93] pod "kube-controller-manager-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.021806 2197206 pod_ready.go:82] duration metric: took 5.155108ms for pod "kube-controller-manager-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.021818 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-gz7x6" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.028255 2197206 pod_ready.go:93] pod "kube-proxy-gz7x6" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.028280 2197206 pod_ready.go:82] duration metric: took 6.454274ms for pod "kube-proxy-gz7x6" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.028289 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.403358 2197206 pod_ready.go:93] pod "kube-scheduler-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.403389 2197206 pod_ready.go:82] duration metric: took 375.092058ms for pod "kube-scheduler-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.403398 2197206 pod_ready.go:39] duration metric: took 8.418019424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:43:40.403415 2197206 api_server.go:52] waiting for apiserver process to appear ...
	I0120 16:43:40.403470 2197206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:43:40.420906 2197206 api_server.go:72] duration metric: took 8.810975265s to wait for apiserver process to appear ...
	I0120 16:43:40.420936 2197206 api_server.go:88] waiting for apiserver healthz status ...
	I0120 16:43:40.420959 2197206 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0120 16:43:40.427501 2197206 api_server.go:279] https://192.168.72.88:8443/healthz returned 200:
	ok
	I0120 16:43:40.428593 2197206 api_server.go:141] control plane version: v1.32.0
	I0120 16:43:40.428625 2197206 api_server.go:131] duration metric: took 7.680154ms to wait for apiserver health ...
	I0120 16:43:40.428636 2197206 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 16:43:40.607673 2197206 system_pods.go:59] 7 kube-system pods found
	I0120 16:43:40.607711 2197206 system_pods.go:61] "coredns-668d6bf9bc-6ztbb" [39abe601-d0fa-4246-b8ab-6a9f4353c207] Running
	I0120 16:43:40.607716 2197206 system_pods.go:61] "etcd-bridge-708138" [73b4b429-aa20-47b8-bd96-bbe96a60b0a5] Running
	I0120 16:43:40.607719 2197206 system_pods.go:61] "kube-apiserver-bridge-708138" [bb3e6a95-e43a-4b98-a1bd-ea15b532e6d5] Running
	I0120 16:43:40.607723 2197206 system_pods.go:61] "kube-controller-manager-bridge-708138" [818c702e-fca4-491e-8677-6fe699c01561] Running
	I0120 16:43:40.607727 2197206 system_pods.go:61] "kube-proxy-gz7x6" [927ee7ed-4e8e-48de-b94c-c91208b52cca] Running
	I0120 16:43:40.607730 2197206 system_pods.go:61] "kube-scheduler-bridge-708138" [518ce086-80f8-4fb1-b1b2-faf5800915d5] Running
	I0120 16:43:40.607733 2197206 system_pods.go:61] "storage-provisioner" [7057ca4d-ad71-42c2-810a-9a33e8b409de] Running
	I0120 16:43:40.607740 2197206 system_pods.go:74] duration metric: took 179.093225ms to wait for pod list to return data ...
	I0120 16:43:40.607747 2197206 default_sa.go:34] waiting for default service account to be created ...
	I0120 16:43:40.803775 2197206 default_sa.go:45] found service account: "default"
	I0120 16:43:40.803805 2197206 default_sa.go:55] duration metric: took 196.051704ms for default service account to be created ...
	I0120 16:43:40.803813 2197206 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 16:43:41.006405 2197206 system_pods.go:87] 7 kube-system pods found
	I0120 16:43:41.203196 2197206 system_pods.go:105] "coredns-668d6bf9bc-6ztbb" [39abe601-d0fa-4246-b8ab-6a9f4353c207] Running
	I0120 16:43:41.203220 2197206 system_pods.go:105] "etcd-bridge-708138" [73b4b429-aa20-47b8-bd96-bbe96a60b0a5] Running
	I0120 16:43:41.203225 2197206 system_pods.go:105] "kube-apiserver-bridge-708138" [bb3e6a95-e43a-4b98-a1bd-ea15b532e6d5] Running
	I0120 16:43:41.203230 2197206 system_pods.go:105] "kube-controller-manager-bridge-708138" [818c702e-fca4-491e-8677-6fe699c01561] Running
	I0120 16:43:41.203234 2197206 system_pods.go:105] "kube-proxy-gz7x6" [927ee7ed-4e8e-48de-b94c-c91208b52cca] Running
	I0120 16:43:41.203238 2197206 system_pods.go:105] "kube-scheduler-bridge-708138" [518ce086-80f8-4fb1-b1b2-faf5800915d5] Running
	I0120 16:43:41.203243 2197206 system_pods.go:105] "storage-provisioner" [7057ca4d-ad71-42c2-810a-9a33e8b409de] Running
	I0120 16:43:41.203251 2197206 system_pods.go:147] duration metric: took 399.431194ms to wait for k8s-apps to be running ...
	I0120 16:43:41.203259 2197206 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 16:43:41.203319 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:43:41.218649 2197206 system_svc.go:56] duration metric: took 15.377778ms WaitForService to wait for kubelet
	I0120 16:43:41.218683 2197206 kubeadm.go:582] duration metric: took 9.608759794s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:43:41.218707 2197206 node_conditions.go:102] verifying NodePressure condition ...
	I0120 16:43:41.404150 2197206 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 16:43:41.404181 2197206 node_conditions.go:123] node cpu capacity is 2
	I0120 16:43:41.404194 2197206 node_conditions.go:105] duration metric: took 185.483174ms to run NodePressure ...
	I0120 16:43:41.404207 2197206 start.go:241] waiting for startup goroutines ...
	I0120 16:43:41.404213 2197206 start.go:246] waiting for cluster config update ...
	I0120 16:43:41.404225 2197206 start.go:255] writing updated cluster config ...
	I0120 16:43:41.404496 2197206 ssh_runner.go:195] Run: rm -f paused
	I0120 16:43:41.457290 2197206 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 16:43:41.459151 2197206 out.go:177] * Done! kubectl is now configured to use "bridge-708138" cluster and "default" namespace by default
	I0120 16:43:40.825164 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:43.325186 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:45.825830 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:48.325148 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:50.325324 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:52.825144 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:54.825386 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:57.325511 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:59.825432 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:01.826019 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:04.324951 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:06.327813 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:08.825548 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:10.825618 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:13.325998 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:15.824909 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:18.325253 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:20.325659 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:22.825615 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:25.324569 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:27.324668 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:29.325114 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:31.824591 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:33.825417 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:36.325425 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:38.326595 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:40.825370 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:43.325332 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:45.825470 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:48.325279 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:50.825752 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:53.326233 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:55.327674 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:57.824868 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:59.825796 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:02.325316 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:04.325859 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:06.825325 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:09.325718 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:11.825001 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:14.324938 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:16.325124 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:18.325501 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:20.825364 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:22.827208 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:25.325469 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:27.825982 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:30.325432 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:32.325551 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:34.825047 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:36.825526 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:39.325753 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:41.825898 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:44.325151 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:46.325219 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:48.325661 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:50.826115 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:53.325524 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:55.825672 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:57.825995 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:00.325672 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:02.824695 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:04.825548 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:07.325274 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:09.325798 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:11.824561 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:13.825167 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:15.825328 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:18.324814 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:20.824710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:22.825668 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:25.325111 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:27.824859 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:29.825200 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:32.328676 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:34.825710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:36.826122 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:39.324220 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:41.324710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:43.325287 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:45.325431 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:47.824648 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:49.825286 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:51.825539 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:53.825772 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:55.826486 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:58.324721 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:00.325134 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:02.825138 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:05.324759 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:05.324796 2195552 node_ready.go:38] duration metric: took 4m0.003559137s for node "flannel-708138" to be "Ready" ...
	I0120 16:47:05.327110 2195552 out.go:201] 
	W0120 16:47:05.328484 2195552 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0120 16:47:05.328509 2195552 out.go:270] * 
	W0120 16:47:05.329391 2195552 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:47:05.331128 2195552 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.768261868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737391779768233505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c820a512-104f-4640-8e34-8459807a0822 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.768855564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75cc4f54-293c-4990-806d-3bc2753729c2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.768920991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75cc4f54-293c-4990-806d-3bc2753729c2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.768957772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=75cc4f54-293c-4990-806d-3bc2753729c2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.800895564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf9a0718-4661-4654-bd68-88cee50da98c name=/runtime.v1.RuntimeService/Version
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.800990428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf9a0718-4661-4654-bd68-88cee50da98c name=/runtime.v1.RuntimeService/Version
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.803545729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42687ffb-4ef9-4a17-80b1-47a9bf5a4178 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.804020318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737391779803998912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42687ffb-4ef9-4a17-80b1-47a9bf5a4178 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.804856944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1879319b-a203-4e02-94c0-cfa95af3631a name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.804917231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1879319b-a203-4e02-94c0-cfa95af3631a name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.804948673Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1879319b-a203-4e02-94c0-cfa95af3631a name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.836906361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68a408d2-0fc7-4f3b-8982-66ad2fc7435b name=/runtime.v1.RuntimeService/Version
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.836993672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68a408d2-0fc7-4f3b-8982-66ad2fc7435b name=/runtime.v1.RuntimeService/Version
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.838286571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=074bbee4-6238-4c0c-a33d-5ff156d6aa8c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.838680452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737391779838659362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=074bbee4-6238-4c0c-a33d-5ff156d6aa8c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.839433220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=633eb229-c45d-48ae-a4d0-22ee4f56283f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.839488372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=633eb229-c45d-48ae-a4d0-22ee4f56283f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.839523808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=633eb229-c45d-48ae-a4d0-22ee4f56283f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.872886798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc45499d-7b89-4710-b5d0-322a0b1b36d8 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.872960300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc45499d-7b89-4710-b5d0-322a0b1b36d8 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.873906323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7615d411-450d-4bc0-ade2-6aa45a803a48 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.874266411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737391779874244919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7615d411-450d-4bc0-ade2-6aa45a803a48 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.874929773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b07a9743-a789-45fb-a9c6-63d5c0535fdb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.874988379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b07a9743-a789-45fb-a9c6-63d5c0535fdb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:49:39 old-k8s-version-806597 crio[634]: time="2025-01-20 16:49:39.875024217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b07a9743-a789-45fb-a9c6-63d5c0535fdb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 16:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055270] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137706] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.025165] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.690534] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.917675] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.063794] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072347] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.228149] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.140637] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.243673] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.859627] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.059674] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.198086] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[ +11.593379] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 16:36] systemd-fstab-generator[5007]: Ignoring "noauto" option for root device
	[Jan20 16:38] systemd-fstab-generator[5282]: Ignoring "noauto" option for root device
	[  +0.065823] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 16:49:40 up 17 min,  0 users,  load average: 0.08, 0.08, 0.08
	Linux old-k8s-version-806597 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:134 +0x191
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: goroutine 144 [select]:
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000907f58, 0x4f0ac20, 0xc000205540, 0x1, 0xc00009e0c0)
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001e6620, 0xc00009e0c0)
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: goroutine 125 [select]:
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000217d60, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000028d20, 0x0, 0x0)
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0009ce1c0)
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 20 16:49:35 old-k8s-version-806597 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 20 16:49:36 old-k8s-version-806597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Jan 20 16:49:36 old-k8s-version-806597 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 16:49:36 old-k8s-version-806597 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 16:49:36 old-k8s-version-806597 kubelet[6456]: I0120 16:49:36.309086    6456 server.go:416] Version: v1.20.0
	Jan 20 16:49:36 old-k8s-version-806597 kubelet[6456]: I0120 16:49:36.309493    6456 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 16:49:36 old-k8s-version-806597 kubelet[6456]: I0120 16:49:36.317676    6456 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 16:49:36 old-k8s-version-806597 kubelet[6456]: W0120 16:49:36.319626    6456 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 20 16:49:36 old-k8s-version-806597 kubelet[6456]: I0120 16:49:36.319727    6456 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 2 (232.15086ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-806597" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.56s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (291.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0120 16:42:14.249699 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p flannel-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: exit status 80 (4m51.396629154s)

                                                
                                                
-- stdout --
	* [flannel-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "flannel-708138" primary control-plane node in "flannel-708138" cluster
	* Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Flannel (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:42:13.989390 2195552 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:42:13.989697 2195552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:42:13.989709 2195552 out.go:358] Setting ErrFile to fd 2...
	I0120 16:42:13.989714 2195552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:42:13.989935 2195552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:42:13.990652 2195552 out.go:352] Setting JSON to false
	I0120 16:42:13.991950 2195552 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":30280,"bootTime":1737361054,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:42:13.992076 2195552 start.go:139] virtualization: kvm guest
	I0120 16:42:13.994332 2195552 out.go:177] * [flannel-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:42:13.996429 2195552 notify.go:220] Checking for updates...
	I0120 16:42:13.996457 2195552 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:42:13.998117 2195552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:42:13.999511 2195552 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:42:14.000965 2195552 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:14.002413 2195552 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:42:14.003915 2195552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:42:14.005909 2195552 config.go:182] Loaded profile config "calico-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:14.006041 2195552 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:14.006167 2195552 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:42:14.006300 2195552 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:42:14.049050 2195552 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:42:14.050691 2195552 start.go:297] selected driver: kvm2
	I0120 16:42:14.050747 2195552 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:42:14.050780 2195552 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:42:14.051835 2195552 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:42:14.051948 2195552 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:42:14.069975 2195552 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:42:14.070061 2195552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:42:14.070322 2195552 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:42:14.070356 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:42:14.070362 2195552 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0120 16:42:14.070409 2195552 start.go:340] cluster config:
	{Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:42:14.070512 2195552 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:42:14.072745 2195552 out.go:177] * Starting "flannel-708138" primary control-plane node in "flannel-708138" cluster
	I0120 16:42:14.074369 2195552 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:42:14.074433 2195552 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:42:14.074455 2195552 cache.go:56] Caching tarball of preloaded images
	I0120 16:42:14.074572 2195552 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:42:14.074583 2195552 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
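
The cache check above simply looks for the preloaded tarball on disk before deciding whether to download it. A minimal, self-contained sketch of that decision (the path layout is copied from the log; the helper name is illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasPreload reports whether the preloaded-images tarball for a given
// Kubernetes version is already present in the local cache, mirroring the
// "Found local preload ... skipping download" decision in the log above.
func hasPreload(minikubeHome, k8sVersion string) (string, bool) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	st, err := os.Stat(p)
	return p, err == nil && !st.IsDir()
}

func main() {
	home, _ := os.UserHomeDir()
	if p, ok := hasPreload(filepath.Join(home, ".minikube"), "v1.32.0"); ok {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("preload not cached, would download:", p)
	}
}
```
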
	I0120 16:42:14.074739 2195552 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/config.json ...
	I0120 16:42:14.074772 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/config.json: {Name:mk89a66bb4ff941d0695d038ac5204f912778e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:14.074922 2195552 start.go:360] acquireMachinesLock for flannel-708138: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:42:14.074955 2195552 start.go:364] duration metric: took 19.892µs to acquireMachinesLock for "flannel-708138"
	I0120 16:42:14.074973 2195552 start.go:93] Provisioning new machine with config: &{Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:42:14.075075 2195552 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:42:14.077085 2195552 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 16:42:14.077272 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:42:14.077335 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:42:14.094163 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0120 16:42:14.094721 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:42:14.095413 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:42:14.095438 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:42:14.095819 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:42:14.096028 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:14.096165 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:14.096330 2195552 start.go:159] libmachine.API.Create for "flannel-708138" (driver="kvm2")
	I0120 16:42:14.096359 2195552 client.go:168] LocalClient.Create starting
	I0120 16:42:14.096397 2195552 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:42:14.096448 2195552 main.go:141] libmachine: Decoding PEM data...
	I0120 16:42:14.096465 2195552 main.go:141] libmachine: Parsing certificate...
	I0120 16:42:14.096537 2195552 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:42:14.096560 2195552 main.go:141] libmachine: Decoding PEM data...
	I0120 16:42:14.096582 2195552 main.go:141] libmachine: Parsing certificate...
	I0120 16:42:14.096607 2195552 main.go:141] libmachine: Running pre-create checks...
	I0120 16:42:14.096617 2195552 main.go:141] libmachine: (flannel-708138) Calling .PreCreateCheck
	I0120 16:42:14.096997 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:14.097441 2195552 main.go:141] libmachine: Creating machine...
	I0120 16:42:14.097457 2195552 main.go:141] libmachine: (flannel-708138) Calling .Create
	I0120 16:42:14.097673 2195552 main.go:141] libmachine: (flannel-708138) creating KVM machine...
	I0120 16:42:14.097691 2195552 main.go:141] libmachine: (flannel-708138) creating network...
	I0120 16:42:14.099180 2195552 main.go:141] libmachine: (flannel-708138) DBG | found existing default KVM network
	I0120 16:42:14.100853 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:14.100672 2195575 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015970}
	I0120 16:42:14.100880 2195552 main.go:141] libmachine: (flannel-708138) DBG | created network xml: 
	I0120 16:42:14.100891 2195552 main.go:141] libmachine: (flannel-708138) DBG | <network>
	I0120 16:42:14.100899 2195552 main.go:141] libmachine: (flannel-708138) DBG |   <name>mk-flannel-708138</name>
	I0120 16:42:14.100908 2195552 main.go:141] libmachine: (flannel-708138) DBG |   <dns enable='no'/>
	I0120 16:42:14.100917 2195552 main.go:141] libmachine: (flannel-708138) DBG |   
	I0120 16:42:14.100927 2195552 main.go:141] libmachine: (flannel-708138) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0120 16:42:14.100937 2195552 main.go:141] libmachine: (flannel-708138) DBG |     <dhcp>
	I0120 16:42:14.100947 2195552 main.go:141] libmachine: (flannel-708138) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0120 16:42:14.100958 2195552 main.go:141] libmachine: (flannel-708138) DBG |     </dhcp>
	I0120 16:42:14.100966 2195552 main.go:141] libmachine: (flannel-708138) DBG |   </ip>
	I0120 16:42:14.100979 2195552 main.go:141] libmachine: (flannel-708138) DBG |   
	I0120 16:42:14.101020 2195552 main.go:141] libmachine: (flannel-708138) DBG | </network>
	I0120 16:42:14.101043 2195552 main.go:141] libmachine: (flannel-708138) DBG | 
	I0120 16:42:14.106552 2195552 main.go:141] libmachine: (flannel-708138) DBG | trying to create private KVM network mk-flannel-708138 192.168.39.0/24...
	I0120 16:42:14.186152 2195552 main.go:141] libmachine: (flannel-708138) DBG | private KVM network mk-flannel-708138 192.168.39.0/24 created
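
The network XML dumped above is what gets handed to libvirt to define and start the private mk-flannel-708138 network. A rough sketch of that call sequence using the libvirt Go bindings (libvirt.org/go/libvirt is an assumption about the binding in use; minikube's kvm2 driver wraps these calls differently):

```go
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// networkXML mirrors the "created network xml" block printed in the log.
const networkXML = `<network>
  <name>mk-flannel-708138</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connecting to libvirt: %v", err)
	}
	defer conn.Close()

	// Define the persistent network from the XML, then start it so the
	// DHCP range becomes available to the new VM.
	nw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("defining network: %v", err)
	}
	defer nw.Free()

	if err := nw.Create(); err != nil {
		log.Fatalf("starting network: %v", err)
	}
	log.Println("private KVM network mk-flannel-708138 created")
}
```
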
	I0120 16:42:14.186196 2195552 main.go:141] libmachine: (flannel-708138) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138 ...
	I0120 16:42:14.186211 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:14.186124 2195575 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:14.186224 2195552 main.go:141] libmachine: (flannel-708138) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:42:14.186244 2195552 main.go:141] libmachine: (flannel-708138) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:42:14.497280 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:14.497140 2195575 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa...
	I0120 16:42:14.598947 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:14.598775 2195575 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/flannel-708138.rawdisk...
	I0120 16:42:14.598993 2195552 main.go:141] libmachine: (flannel-708138) DBG | Writing magic tar header
	I0120 16:42:14.599014 2195552 main.go:141] libmachine: (flannel-708138) DBG | Writing SSH key tar header
	I0120 16:42:14.599031 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:14.598935 2195575 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138 ...
	I0120 16:42:14.599050 2195552 main.go:141] libmachine: (flannel-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138
	I0120 16:42:14.599142 2195552 main.go:141] libmachine: (flannel-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138 (perms=drwx------)
	I0120 16:42:14.599189 2195552 main.go:141] libmachine: (flannel-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:42:14.599206 2195552 main.go:141] libmachine: (flannel-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:42:14.599249 2195552 main.go:141] libmachine: (flannel-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:42:14.599304 2195552 main.go:141] libmachine: (flannel-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:14.599318 2195552 main.go:141] libmachine: (flannel-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:42:14.599332 2195552 main.go:141] libmachine: (flannel-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:42:14.599345 2195552 main.go:141] libmachine: (flannel-708138) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:42:14.599359 2195552 main.go:141] libmachine: (flannel-708138) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:42:14.599370 2195552 main.go:141] libmachine: (flannel-708138) creating domain...
	I0120 16:42:14.599425 2195552 main.go:141] libmachine: (flannel-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:42:14.599450 2195552 main.go:141] libmachine: (flannel-708138) DBG | checking permissions on dir: /home/jenkins
	I0120 16:42:14.599463 2195552 main.go:141] libmachine: (flannel-708138) DBG | checking permissions on dir: /home
	I0120 16:42:14.599473 2195552 main.go:141] libmachine: (flannel-708138) DBG | skipping /home - not owner
	I0120 16:42:14.600709 2195552 main.go:141] libmachine: (flannel-708138) define libvirt domain using xml: 
	I0120 16:42:14.600734 2195552 main.go:141] libmachine: (flannel-708138) <domain type='kvm'>
	I0120 16:42:14.600750 2195552 main.go:141] libmachine: (flannel-708138)   <name>flannel-708138</name>
	I0120 16:42:14.600763 2195552 main.go:141] libmachine: (flannel-708138)   <memory unit='MiB'>3072</memory>
	I0120 16:42:14.600774 2195552 main.go:141] libmachine: (flannel-708138)   <vcpu>2</vcpu>
	I0120 16:42:14.600781 2195552 main.go:141] libmachine: (flannel-708138)   <features>
	I0120 16:42:14.600789 2195552 main.go:141] libmachine: (flannel-708138)     <acpi/>
	I0120 16:42:14.600794 2195552 main.go:141] libmachine: (flannel-708138)     <apic/>
	I0120 16:42:14.600804 2195552 main.go:141] libmachine: (flannel-708138)     <pae/>
	I0120 16:42:14.600811 2195552 main.go:141] libmachine: (flannel-708138)     
	I0120 16:42:14.600816 2195552 main.go:141] libmachine: (flannel-708138)   </features>
	I0120 16:42:14.600821 2195552 main.go:141] libmachine: (flannel-708138)   <cpu mode='host-passthrough'>
	I0120 16:42:14.600826 2195552 main.go:141] libmachine: (flannel-708138)   
	I0120 16:42:14.600830 2195552 main.go:141] libmachine: (flannel-708138)   </cpu>
	I0120 16:42:14.600835 2195552 main.go:141] libmachine: (flannel-708138)   <os>
	I0120 16:42:14.600841 2195552 main.go:141] libmachine: (flannel-708138)     <type>hvm</type>
	I0120 16:42:14.600849 2195552 main.go:141] libmachine: (flannel-708138)     <boot dev='cdrom'/>
	I0120 16:42:14.600856 2195552 main.go:141] libmachine: (flannel-708138)     <boot dev='hd'/>
	I0120 16:42:14.600882 2195552 main.go:141] libmachine: (flannel-708138)     <bootmenu enable='no'/>
	I0120 16:42:14.600898 2195552 main.go:141] libmachine: (flannel-708138)   </os>
	I0120 16:42:14.600903 2195552 main.go:141] libmachine: (flannel-708138)   <devices>
	I0120 16:42:14.600908 2195552 main.go:141] libmachine: (flannel-708138)     <disk type='file' device='cdrom'>
	I0120 16:42:14.600919 2195552 main.go:141] libmachine: (flannel-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/boot2docker.iso'/>
	I0120 16:42:14.600926 2195552 main.go:141] libmachine: (flannel-708138)       <target dev='hdc' bus='scsi'/>
	I0120 16:42:14.600934 2195552 main.go:141] libmachine: (flannel-708138)       <readonly/>
	I0120 16:42:14.600944 2195552 main.go:141] libmachine: (flannel-708138)     </disk>
	I0120 16:42:14.600954 2195552 main.go:141] libmachine: (flannel-708138)     <disk type='file' device='disk'>
	I0120 16:42:14.600966 2195552 main.go:141] libmachine: (flannel-708138)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:42:14.600979 2195552 main.go:141] libmachine: (flannel-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/flannel-708138.rawdisk'/>
	I0120 16:42:14.600987 2195552 main.go:141] libmachine: (flannel-708138)       <target dev='hda' bus='virtio'/>
	I0120 16:42:14.600992 2195552 main.go:141] libmachine: (flannel-708138)     </disk>
	I0120 16:42:14.601014 2195552 main.go:141] libmachine: (flannel-708138)     <interface type='network'>
	I0120 16:42:14.601040 2195552 main.go:141] libmachine: (flannel-708138)       <source network='mk-flannel-708138'/>
	I0120 16:42:14.601053 2195552 main.go:141] libmachine: (flannel-708138)       <model type='virtio'/>
	I0120 16:42:14.601064 2195552 main.go:141] libmachine: (flannel-708138)     </interface>
	I0120 16:42:14.601071 2195552 main.go:141] libmachine: (flannel-708138)     <interface type='network'>
	I0120 16:42:14.601077 2195552 main.go:141] libmachine: (flannel-708138)       <source network='default'/>
	I0120 16:42:14.601084 2195552 main.go:141] libmachine: (flannel-708138)       <model type='virtio'/>
	I0120 16:42:14.601089 2195552 main.go:141] libmachine: (flannel-708138)     </interface>
	I0120 16:42:14.601093 2195552 main.go:141] libmachine: (flannel-708138)     <serial type='pty'>
	I0120 16:42:14.601097 2195552 main.go:141] libmachine: (flannel-708138)       <target port='0'/>
	I0120 16:42:14.601104 2195552 main.go:141] libmachine: (flannel-708138)     </serial>
	I0120 16:42:14.601109 2195552 main.go:141] libmachine: (flannel-708138)     <console type='pty'>
	I0120 16:42:14.601124 2195552 main.go:141] libmachine: (flannel-708138)       <target type='serial' port='0'/>
	I0120 16:42:14.601131 2195552 main.go:141] libmachine: (flannel-708138)     </console>
	I0120 16:42:14.601138 2195552 main.go:141] libmachine: (flannel-708138)     <rng model='virtio'>
	I0120 16:42:14.601169 2195552 main.go:141] libmachine: (flannel-708138)       <backend model='random'>/dev/random</backend>
	I0120 16:42:14.601195 2195552 main.go:141] libmachine: (flannel-708138)     </rng>
	I0120 16:42:14.601210 2195552 main.go:141] libmachine: (flannel-708138)     
	I0120 16:42:14.601219 2195552 main.go:141] libmachine: (flannel-708138)     
	I0120 16:42:14.601227 2195552 main.go:141] libmachine: (flannel-708138)   </devices>
	I0120 16:42:14.601237 2195552 main.go:141] libmachine: (flannel-708138) </domain>
	I0120 16:42:14.601247 2195552 main.go:141] libmachine: (flannel-708138) 
	I0120 16:42:14.605539 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:98:23:20 in network default
	I0120 16:42:14.606284 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:14.606298 2195552 main.go:141] libmachine: (flannel-708138) starting domain...
	I0120 16:42:14.606311 2195552 main.go:141] libmachine: (flannel-708138) ensuring networks are active...
	I0120 16:42:14.607077 2195552 main.go:141] libmachine: (flannel-708138) Ensuring network default is active
	I0120 16:42:14.607462 2195552 main.go:141] libmachine: (flannel-708138) Ensuring network mk-flannel-708138 is active
	I0120 16:42:14.607874 2195552 main.go:141] libmachine: (flannel-708138) getting domain XML...
	I0120 16:42:14.608713 2195552 main.go:141] libmachine: (flannel-708138) creating domain...
	I0120 16:42:15.915484 2195552 main.go:141] libmachine: (flannel-708138) waiting for IP...
	I0120 16:42:15.916499 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:15.917204 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:15.917273 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:15.917183 2195575 retry.go:31] will retry after 279.310903ms: waiting for domain to come up
	I0120 16:42:16.197997 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:16.198852 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:16.198921 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:16.198829 2195575 retry.go:31] will retry after 258.767964ms: waiting for domain to come up
	I0120 16:42:16.460008 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:16.461069 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:16.461120 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:16.461043 2195575 retry.go:31] will retry after 341.155332ms: waiting for domain to come up
	I0120 16:42:16.803638 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:16.804327 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:16.804408 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:16.804323 2195575 retry.go:31] will retry after 383.09614ms: waiting for domain to come up
	I0120 16:42:17.188628 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:17.189264 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:17.189318 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:17.189208 2195575 retry.go:31] will retry after 564.554431ms: waiting for domain to come up
	I0120 16:42:17.755083 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:17.755622 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:17.755648 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:17.755592 2195575 retry.go:31] will retry after 940.517414ms: waiting for domain to come up
	I0120 16:42:18.697699 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:18.698174 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:18.698226 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:18.698175 2195575 retry.go:31] will retry after 898.119207ms: waiting for domain to come up
	I0120 16:42:19.598404 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:19.599000 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:19.599036 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:19.598959 2195575 retry.go:31] will retry after 1.334417666s: waiting for domain to come up
	I0120 16:42:20.935560 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:20.936367 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:20.936397 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:20.936331 2195575 retry.go:31] will retry after 1.157267783s: waiting for domain to come up
	I0120 16:42:22.095624 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:22.096185 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:22.096215 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:22.096157 2195575 retry.go:31] will retry after 2.002442288s: waiting for domain to come up
	I0120 16:42:24.100409 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:24.100999 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:24.101039 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:24.100947 2195575 retry.go:31] will retry after 2.742671652s: waiting for domain to come up
	I0120 16:42:26.844863 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:26.845456 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:26.845487 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:26.845432 2195575 retry.go:31] will retry after 3.573534321s: waiting for domain to come up
	I0120 16:42:30.420362 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:30.421027 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:30.421059 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:30.420994 2195575 retry.go:31] will retry after 3.907613054s: waiting for domain to come up
	I0120 16:42:34.330849 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:34.331412 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:34.331455 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:34.331358 2195575 retry.go:31] will retry after 5.584556774s: waiting for domain to come up
	I0120 16:42:39.917690 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:39.918271 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has current primary IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:39.918301 2195552 main.go:141] libmachine: (flannel-708138) found domain IP: 192.168.39.206
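
The retry loop above ("unable to find current IP address ... will retry after ...") boils down to polling the network's DHCP leases until an entry for the domain's MAC address shows up. A simplified sketch, again against the libvirt Go bindings, with a fixed poll interval instead of the growing backoff seen in the log:

```go
package main

import (
	"fmt"
	"log"
	"strings"
	"time"

	"libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases until the lease for the given
// MAC address appears, then returns its IP.
func waitForIP(nw *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		leases, err := nw.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
				return l.IPaddr, nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	nw, err := conn.LookupNetworkByName("mk-flannel-708138")
	if err != nil {
		log.Fatal(err)
	}
	defer nw.Free()

	ip, err := waitForIP(nw, "52:54:00:ff:a2:3d", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found domain IP:", ip)
}
```
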
	I0120 16:42:39.918314 2195552 main.go:141] libmachine: (flannel-708138) reserving static IP address...
	I0120 16:42:39.918709 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find host DHCP lease matching {name: "flannel-708138", mac: "52:54:00:ff:a2:3d", ip: "192.168.39.206"} in network mk-flannel-708138
	I0120 16:42:40.002772 2195552 main.go:141] libmachine: (flannel-708138) DBG | Getting to WaitForSSH function...
	I0120 16:42:40.002812 2195552 main.go:141] libmachine: (flannel-708138) reserved static IP address 192.168.39.206 for domain flannel-708138
	I0120 16:42:40.002826 2195552 main.go:141] libmachine: (flannel-708138) waiting for SSH...
	I0120 16:42:40.005462 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.005818 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.005841 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.006030 2195552 main.go:141] libmachine: (flannel-708138) DBG | Using SSH client type: external
	I0120 16:42:40.006070 2195552 main.go:141] libmachine: (flannel-708138) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa (-rw-------)
	I0120 16:42:40.006114 2195552 main.go:141] libmachine: (flannel-708138) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:42:40.006136 2195552 main.go:141] libmachine: (flannel-708138) DBG | About to run SSH command:
	I0120 16:42:40.006152 2195552 main.go:141] libmachine: (flannel-708138) DBG | exit 0
	I0120 16:42:40.135269 2195552 main.go:141] libmachine: (flannel-708138) DBG | SSH cmd err, output: <nil>: 
	I0120 16:42:40.135526 2195552 main.go:141] libmachine: (flannel-708138) KVM machine creation complete
	I0120 16:42:40.135876 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:40.136615 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:40.136828 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:40.137011 2195552 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:42:40.137029 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:42:40.138406 2195552 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:42:40.138423 2195552 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:42:40.138452 2195552 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:42:40.138464 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.140844 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.141163 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.141205 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.141321 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.141497 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.141697 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.141855 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.142022 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.142224 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.142236 2195552 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:42:40.250660 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
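
The "waiting for SSH" / `exit 0` exchange above is effectively "keep dialing and running a no-op command until it succeeds". A self-contained sketch using golang.org/x/crypto/ssh (the address, user, and key path are taken from the log; the retry cadence is an assumption):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the new VM and runs "exit 0" until it succeeds or the
// timeout expires.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				rerr := session.Run("exit 0")
				session.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh not available on %s within %s", addr, timeout)
}

func main() {
	key := os.ExpandEnv("$HOME/.minikube/machines/flannel-708138/id_rsa")
	if err := waitForSSH("192.168.39.206:22", "docker", key, 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
```
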
	I0120 16:42:40.250692 2195552 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:42:40.250703 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.253520 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.253863 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.253919 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.254020 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.254263 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.254462 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.254593 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.254769 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.254954 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.254966 2195552 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:42:40.371879 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:42:40.371990 2195552 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:42:40.372011 2195552 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:42:40.372023 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.372291 2195552 buildroot.go:166] provisioning hostname "flannel-708138"
	I0120 16:42:40.372320 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.372554 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.375287 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.375686 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.375717 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.375925 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.376151 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.376353 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.376496 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.376659 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.376836 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.376848 2195552 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-708138 && echo "flannel-708138" | sudo tee /etc/hostname
	I0120 16:42:40.501787 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-708138
	
	I0120 16:42:40.501820 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.504836 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.505242 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.505267 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.505435 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.505652 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.505809 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.505915 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.506087 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.506277 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.506293 2195552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-708138' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-708138/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-708138' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:42:40.628479 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:42:40.628514 2195552 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:42:40.628580 2195552 buildroot.go:174] setting up certificates
	I0120 16:42:40.628599 2195552 provision.go:84] configureAuth start
	I0120 16:42:40.628618 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.628897 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:40.631696 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.632058 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.632103 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.632242 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.634596 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.634957 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.634983 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.635147 2195552 provision.go:143] copyHostCerts
	I0120 16:42:40.635203 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:42:40.635213 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:42:40.635282 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:42:40.635416 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:42:40.635427 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:42:40.635466 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:42:40.635533 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:42:40.635540 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:42:40.635560 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:42:40.635622 2195552 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.flannel-708138 san=[127.0.0.1 192.168.39.206 flannel-708138 localhost minikube]
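
The server certificate above is generated with the SAN list printed in the log (127.0.0.1, 192.168.39.206, flannel-708138, localhost, minikube) and signed by the minikube CA. Below is a hedged sketch of that signing step with crypto/x509; a throwaway CA is generated here so the example runs on its own, whereas the real flow loads ca.pem/ca-key.pem from the .minikube/certs directory:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the sketch is self-contained; the real flow reuses an
	// existing CA key pair from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list shown in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-708138"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"flannel-708138", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.206")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```
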
	I0120 16:42:40.788476 2195552 provision.go:177] copyRemoteCerts
	I0120 16:42:40.788537 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:42:40.788565 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.791448 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.791862 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.791889 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.792091 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.792295 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.792425 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.792541 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:40.877555 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:42:40.904115 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0120 16:42:40.933842 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:42:40.962366 2195552 provision.go:87] duration metric: took 333.749236ms to configureAuth
	I0120 16:42:40.962401 2195552 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:42:40.962639 2195552 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:40.962740 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.965753 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.966102 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.966137 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.966346 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.966578 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.966794 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.966936 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.967135 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.967319 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.967333 2195552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:42:41.219615 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:42:41.219649 2195552 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:42:41.219660 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetURL
	I0120 16:42:41.220953 2195552 main.go:141] libmachine: (flannel-708138) DBG | using libvirt version 6000000
	I0120 16:42:41.223183 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.223607 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.223639 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.223729 2195552 main.go:141] libmachine: Docker is up and running!
	I0120 16:42:41.223743 2195552 main.go:141] libmachine: Reticulating splines...
	I0120 16:42:41.223752 2195552 client.go:171] duration metric: took 27.127384878s to LocalClient.Create
	I0120 16:42:41.223781 2195552 start.go:167] duration metric: took 27.127453023s to libmachine.API.Create "flannel-708138"
	I0120 16:42:41.223794 2195552 start.go:293] postStartSetup for "flannel-708138" (driver="kvm2")
	I0120 16:42:41.223803 2195552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:42:41.223831 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.224099 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:42:41.224137 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.226284 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.226568 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.226594 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.226810 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.226999 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.227158 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.227283 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.313516 2195552 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:42:41.318553 2195552 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:42:41.318588 2195552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:42:41.318691 2195552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:42:41.318822 2195552 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:42:41.318966 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:42:41.329039 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:42:41.359288 2195552 start.go:296] duration metric: took 135.474673ms for postStartSetup
	I0120 16:42:41.359376 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:41.360116 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:41.363418 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.363768 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.363797 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.364037 2195552 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/config.json ...
	I0120 16:42:41.364306 2195552 start.go:128] duration metric: took 27.289215285s to createHost
	I0120 16:42:41.364339 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.366928 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.367308 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.367345 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.367538 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.367729 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.367894 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.367999 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.368153 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:41.368324 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:41.368333 2195552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:42:41.479683 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737391361.443756218
	
	I0120 16:42:41.479715 2195552 fix.go:216] guest clock: 1737391361.443756218
	I0120 16:42:41.479725 2195552 fix.go:229] Guest: 2025-01-20 16:42:41.443756218 +0000 UTC Remote: 2025-01-20 16:42:41.364324183 +0000 UTC m=+27.417363622 (delta=79.432035ms)
	I0120 16:42:41.479753 2195552 fix.go:200] guest clock delta is within tolerance: 79.432035ms
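
fix.go above reads the guest's `date +%s.%N` output over SSH and only continues when the delta against the host clock is within tolerance. A small sketch of that comparison (the 2s tolerance is an illustrative assumption, not minikube's configured value):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "date +%s.%N" output returned over SSH
// (e.g. "1737391361.443756218") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 && parts[1] != "" {
		// Right-pad the fractional part to nine digits before parsing it as nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1737391361.443756218")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // illustrative threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync clocks\n", delta)
	}
}
```
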
	I0120 16:42:41.479760 2195552 start.go:83] releasing machines lock for "flannel-708138", held for 27.404795771s
	I0120 16:42:41.479795 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.480084 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:41.483114 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.483496 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.483519 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.483702 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484306 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484533 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484636 2195552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:42:41.484681 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.484751 2195552 ssh_runner.go:195] Run: cat /version.json
	I0120 16:42:41.484776 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.487833 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.487927 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488372 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.488399 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488422 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.488436 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488512 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.488602 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.488694 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.488757 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.488853 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.488899 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.489003 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.489094 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.599954 2195552 ssh_runner.go:195] Run: systemctl --version
	I0120 16:42:41.607089 2195552 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:42:41.776515 2195552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:42:41.783949 2195552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:42:41.784065 2195552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:42:41.801321 2195552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:42:41.801352 2195552 start.go:495] detecting cgroup driver to use...
	I0120 16:42:41.801424 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:42:41.819201 2195552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:42:41.834731 2195552 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:42:41.834824 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:42:41.850093 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:42:41.865030 2195552 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:42:41.992116 2195552 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:42:42.163387 2195552 docker.go:233] disabling docker service ...
	I0120 16:42:42.163482 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:42:42.179064 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:42:42.194832 2195552 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:42:42.325738 2195552 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:42:42.463211 2195552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:42:42.478104 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:42:42.498097 2195552 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:42:42.498191 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.510081 2195552 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:42:42.510166 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.523170 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.535401 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.550805 2195552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:42:42.563405 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.575131 2195552 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.594402 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
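
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to "cgroupfs", force conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough Go sketch of the first two edits as an in-place regexp rewrite (minikube itself shells out to sed, as logged; this is only an illustration):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above

		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

		if err := os.WriteFile(path, data, 0o644); err != nil {
			log.Fatal(err)
		}
	}
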
	I0120 16:42:42.606285 2195552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:42:42.616785 2195552 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:42:42.616863 2195552 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:42:42.631836 2195552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
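
The sysctl probe above fails because br_netfilter is not loaded yet, so the next two commands load the module and enable IPv4 forwarding. A small Go sketch of that fallback, under the assumption that the caller already runs with root privileges:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge netfilter sysctl is missing, the kernel module isn't loaded yet.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
			}
		}

		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			log.Fatal(err)
		}
	}
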
	I0120 16:42:42.643068 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:42:42.774308 2195552 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:42:42.883190 2195552 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:42:42.883286 2195552 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:42:42.889890 2195552 start.go:563] Will wait 60s for crictl version
	I0120 16:42:42.889963 2195552 ssh_runner.go:195] Run: which crictl
	I0120 16:42:42.895340 2195552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:42:42.953318 2195552 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
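
After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear and then up to another 60s for crictl to answer. A simple polling sketch of the socket wait (the 500ms interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
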
	I0120 16:42:42.953426 2195552 ssh_runner.go:195] Run: crio --version
	I0120 16:42:42.988671 2195552 ssh_runner.go:195] Run: crio --version
	I0120 16:42:43.023504 2195552 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:42:43.024796 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:43.030238 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:43.030849 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:43.030886 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:43.031145 2195552 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:42:43.036477 2195552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:42:43.051619 2195552 kubeadm.go:883] updating cluster {Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:42:43.051797 2195552 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:42:43.051875 2195552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:42:43.095932 2195552 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:42:43.096025 2195552 ssh_runner.go:195] Run: which lz4
	I0120 16:42:43.101037 2195552 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:42:43.106099 2195552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:42:43.106139 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:42:44.760474 2195552 crio.go:462] duration metric: took 1.659486395s to copy over tarball
	I0120 16:42:44.760562 2195552 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:42:47.285354 2195552 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.524736784s)
	I0120 16:42:47.285446 2195552 crio.go:469] duration metric: took 2.524929922s to extract the tarball
	I0120 16:42:47.285471 2195552 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:42:47.324858 2195552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:42:47.372415 2195552 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:42:47.372446 2195552 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:42:47.372457 2195552 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.32.0 crio true true} ...
	I0120 16:42:47.372643 2195552 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-708138 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0120 16:42:47.372722 2195552 ssh_runner.go:195] Run: crio config
	I0120 16:42:47.422488 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:42:47.422519 2195552 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:42:47.422554 2195552 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-708138 NodeName:flannel-708138 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:42:47.422786 2195552 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-708138"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:42:47.422890 2195552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:42:47.433846 2195552 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:42:47.433938 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:42:47.444578 2195552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0120 16:42:47.461856 2195552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:42:47.478765 2195552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0120 16:42:47.495925 2195552 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0120 16:42:47.500231 2195552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:42:47.513503 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:42:47.646909 2195552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:42:47.666731 2195552 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138 for IP: 192.168.39.206
	I0120 16:42:47.666760 2195552 certs.go:194] generating shared ca certs ...
	I0120 16:42:47.666784 2195552 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.666988 2195552 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:42:47.667058 2195552 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:42:47.667071 2195552 certs.go:256] generating profile certs ...
	I0120 16:42:47.667161 2195552 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key
	I0120 16:42:47.667181 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt with IP's: []
	I0120 16:42:47.957732 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt ...
	I0120 16:42:47.957764 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt: {Name:mk2f64b37e464c896144cdc44cfc1fc4f548c045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.957936 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key ...
	I0120 16:42:47.957947 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key: {Name:mk1b16a48ea06faf15a739043d6a562a12842ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.958021 2195552 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76
	I0120 16:42:47.958037 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I0120 16:42:48.237739 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 ...
	I0120 16:42:48.237772 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76: {Name:mk2d82f1b438734a66d4bca5d26768f17a50dbb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.237934 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76 ...
	I0120 16:42:48.237945 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76: {Name:mk5552939933befe1ef0d3a7fff6d21fdf398d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.238016 2195552 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt
	I0120 16:42:48.238119 2195552 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key
	I0120 16:42:48.238183 2195552 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key
	I0120 16:42:48.238205 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt with IP's: []
	I0120 16:42:48.328536 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt ...
	I0120 16:42:48.328597 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt: {Name:mk71903f0dc1f4b5602bf3f87a72991a3294fe05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.328771 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key ...
	I0120 16:42:48.328786 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key: {Name:mkb6cb1df1b5d7b66259c1ec746be1ba174817a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
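
The certs.go/crypto.go lines above generate the profile certificates: a client cert, an apiserver serving cert with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206], and an aggregator proxy-client cert, all signed by the cached minikubeCA. A self-contained Go sketch of signing a serving cert with those SANs (the key size, serial numbers, and the 26280h validity are assumptions, the validity chosen to mirror the CertExpiration value in the cluster config):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA standing in for minikubeCA (illustrative only).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			BasicConstraintsValid: true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Apiserver serving cert with the IP SANs seen in the log above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.206"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
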
	I0120 16:42:48.328986 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:42:48.329026 2195552 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:42:48.329038 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:42:48.329061 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:42:48.329085 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:42:48.329113 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:42:48.329155 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:42:48.329806 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:42:48.377022 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:42:48.423232 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:42:48.452106 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:42:48.484435 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:42:48.514707 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:42:48.541159 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:42:48.642490 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:42:48.668101 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:42:48.696379 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:42:48.722994 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:42:48.748145 2195552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:42:48.766358 2195552 ssh_runner.go:195] Run: openssl version
	I0120 16:42:48.773160 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:42:48.785416 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.791084 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.791158 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.797932 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:42:48.811525 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:42:48.826046 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.832200 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.832280 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.838879 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:42:48.851808 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:42:48.865253 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.870647 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.870724 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.877010 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:42:48.889902 2195552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:42:48.894559 2195552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:42:48.894640 2195552 kubeadm.go:392] StartCluster: {Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:42:48.894779 2195552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:42:48.894890 2195552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:42:48.940887 2195552 cri.go:89] found id: ""
	I0120 16:42:48.940984 2195552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:42:48.952531 2195552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:42:48.963786 2195552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:42:48.974250 2195552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:42:48.974278 2195552 kubeadm.go:157] found existing configuration files:
	
	I0120 16:42:48.974338 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:42:48.984449 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:42:48.984527 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:42:48.995330 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:42:49.006034 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:42:49.006104 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:42:49.017110 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:42:49.027295 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:42:49.027368 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:42:49.040812 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:42:49.051290 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:42:49.051377 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:42:49.066485 2195552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:42:49.134741 2195552 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:42:49.134946 2195552 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:42:49.249160 2195552 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:42:49.249323 2195552 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:42:49.249481 2195552 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:42:49.263796 2195552 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:42:49.298061 2195552 out.go:235]   - Generating certificates and keys ...
	I0120 16:42:49.298271 2195552 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:42:49.298360 2195552 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:42:49.326405 2195552 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:42:49.603739 2195552 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:42:50.017706 2195552 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:42:50.212861 2195552 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:42:50.332005 2195552 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:42:50.332365 2195552 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-708138 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0120 16:42:50.576915 2195552 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:42:50.577225 2195552 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-708138 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0120 16:42:50.922540 2195552 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:42:51.148072 2195552 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:42:51.262833 2195552 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:42:51.262930 2195552 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:42:51.404906 2195552 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:42:51.648067 2195552 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:42:51.759756 2195552 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:42:51.962741 2195552 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:42:52.453700 2195552 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:42:52.456041 2195552 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:42:52.459366 2195552 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:42:52.461278 2195552 out.go:235]   - Booting up control plane ...
	I0120 16:42:52.461391 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:42:52.461507 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:42:52.461588 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:42:52.484769 2195552 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:42:52.493367 2195552 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:42:52.493452 2195552 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:42:52.663075 2195552 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:42:52.664096 2195552 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:42:54.164599 2195552 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501261507s
	I0120 16:42:54.164721 2195552 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:42:59.162803 2195552 kubeadm.go:310] [api-check] The API server is healthy after 5.001059076s
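
The kubeadm output above waits on two HTTP probes: the kubelet healthz endpoint at http://127.0.0.1:10248/healthz and then the API server, each with a 4m0s budget. A minimal Go sketch of such a health poll (client timeout and poll interval are assumptions):

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// waitHealthy polls url until it returns 200 OK or the timeout elapses.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s not healthy within %v", url, timeout)
	}

	func main() {
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("kubelet is healthy")
	}
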
	I0120 16:42:59.182087 2195552 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:42:59.202928 2195552 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:42:59.251598 2195552 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:42:59.251870 2195552 kubeadm.go:310] [mark-control-plane] Marking the node flannel-708138 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:42:59.267327 2195552 kubeadm.go:310] [bootstrap-token] Using token: 0uevl5.w9rl7hild7q3qmvj
	I0120 16:42:59.268924 2195552 out.go:235]   - Configuring RBAC rules ...
	I0120 16:42:59.269076 2195552 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:42:59.276545 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:42:59.290974 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:42:59.296882 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:42:59.304061 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:42:59.311324 2195552 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:42:59.571703 2195552 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:42:59.999391 2195552 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:43:00.569884 2195552 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:43:00.572667 2195552 kubeadm.go:310] 
	I0120 16:43:00.572758 2195552 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:43:00.572768 2195552 kubeadm.go:310] 
	I0120 16:43:00.572931 2195552 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:43:00.572966 2195552 kubeadm.go:310] 
	I0120 16:43:00.573016 2195552 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:43:00.573090 2195552 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:43:00.573154 2195552 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:43:00.573163 2195552 kubeadm.go:310] 
	I0120 16:43:00.573251 2195552 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:43:00.573265 2195552 kubeadm.go:310] 
	I0120 16:43:00.573345 2195552 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:43:00.573378 2195552 kubeadm.go:310] 
	I0120 16:43:00.573475 2195552 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:43:00.573604 2195552 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:43:00.573697 2195552 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:43:00.573707 2195552 kubeadm.go:310] 
	I0120 16:43:00.573823 2195552 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:43:00.573923 2195552 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:43:00.573930 2195552 kubeadm.go:310] 
	I0120 16:43:00.574048 2195552 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0uevl5.w9rl7hild7q3qmvj \
	I0120 16:43:00.574201 2195552 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:43:00.574235 2195552 kubeadm.go:310] 	--control-plane 
	I0120 16:43:00.574258 2195552 kubeadm.go:310] 
	I0120 16:43:00.574400 2195552 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:43:00.574432 2195552 kubeadm.go:310] 
	I0120 16:43:00.574590 2195552 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0uevl5.w9rl7hild7q3qmvj \
	I0120 16:43:00.574795 2195552 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:43:00.575007 2195552 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:43:00.575049 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:43:00.576721 2195552 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0120 16:43:00.577996 2195552 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0120 16:43:00.584504 2195552 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 16:43:00.584526 2195552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0120 16:43:00.610147 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 16:43:01.108354 2195552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:43:01.108472 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:01.108474 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-708138 minikube.k8s.io/updated_at=2025_01_20T16_43_01_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=flannel-708138 minikube.k8s.io/primary=true
	I0120 16:43:01.153107 2195552 ops.go:34] apiserver oom_adj: -16
	I0120 16:43:01.323188 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:01.823589 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:02.324096 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:02.823844 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:03.323872 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:03.823872 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:04.323604 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:04.428740 2195552 kubeadm.go:1113] duration metric: took 3.320348756s to wait for elevateKubeSystemPrivileges
	I0120 16:43:04.428788 2195552 kubeadm.go:394] duration metric: took 15.534153444s to StartCluster
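
The repeated `kubectl get sa default` runs above are a retry loop: after creating the minikube-rbac clusterrolebinding, the code keeps polling until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration metric measures. A stand-alone Go sketch of the same polling pattern, shelling out to the bundled kubectl (the 2-minute budget is an assumption; the 500ms interval matches the spacing of the retries above):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.32.0/kubectl" // path from the log above
		kubeconfig := "/var/lib/minikube/kubeconfig"

		deadline := time.Now().Add(2 * time.Minute) // assumed budget
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				fmt.Println("default ServiceAccount exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for the default ServiceAccount")
		os.Exit(1)
	}
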
	I0120 16:43:04.428816 2195552 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:04.428921 2195552 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:43:04.430989 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:04.431307 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 16:43:04.431303 2195552 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:43:04.431336 2195552 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:43:04.431519 2195552 addons.go:69] Setting storage-provisioner=true in profile "flannel-708138"
	I0120 16:43:04.431529 2195552 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:04.431538 2195552 addons.go:238] Setting addon storage-provisioner=true in "flannel-708138"
	I0120 16:43:04.431579 2195552 host.go:66] Checking if "flannel-708138" exists ...
	I0120 16:43:04.431586 2195552 addons.go:69] Setting default-storageclass=true in profile "flannel-708138"
	I0120 16:43:04.431621 2195552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-708138"
	I0120 16:43:04.432070 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.432112 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.432118 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.432151 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.435123 2195552 out.go:177] * Verifying Kubernetes components...
	I0120 16:43:04.436595 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:04.449431 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0120 16:43:04.449469 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0120 16:43:04.450031 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.450065 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.450628 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.450657 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.450772 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.450798 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.451074 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.451199 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.451435 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.451674 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.451723 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.455136 2195552 addons.go:238] Setting addon default-storageclass=true in "flannel-708138"
	I0120 16:43:04.455176 2195552 host.go:66] Checking if "flannel-708138" exists ...
	I0120 16:43:04.455442 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.455480 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.468668 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0120 16:43:04.469232 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.469794 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.469810 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.470234 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.470456 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.471939 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I0120 16:43:04.472364 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.472464 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:43:04.472904 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.472933 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.473322 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.473822 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.473860 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.474444 2195552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:43:04.475956 2195552 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:04.475976 2195552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:43:04.475998 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:43:04.479414 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.479895 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:43:04.479928 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.480056 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:43:04.480246 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:43:04.480426 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:43:04.480560 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:43:04.491228 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0120 16:43:04.491682 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.492333 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.492364 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.492740 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.492924 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.494696 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:43:04.494958 2195552 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:04.494975 2195552 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:43:04.494997 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:43:04.497642 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.498099 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:43:04.498131 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.498258 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:43:04.498486 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:43:04.498649 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:43:04.498811 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:43:04.741102 2195552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:04.741114 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 16:43:04.889912 2195552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:04.966678 2195552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:05.319499 2195552 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0120 16:43:05.321208 2195552 node_ready.go:35] waiting up to 15m0s for node "flannel-708138" to be "Ready" ...
	I0120 16:43:05.578109 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578136 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.578257 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578282 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.578512 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.578539 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.578550 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578558 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.580280 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580297 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580296 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580313 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.580323 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.580333 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.580340 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.580334 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580582 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580586 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580600 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.591009 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.591045 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.591353 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.591368 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.591377 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.593936 2195552 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 16:43:05.595175 2195552 addons.go:514] duration metric: took 1.163842267s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 16:43:05.824160 2195552 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-708138" context rescaled to 1 replicas
	I0120 16:43:07.325793 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:09.824714 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:11.826450 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:14.324786 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:16.325269 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:18.393718 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:20.825198 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:23.325444 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:25.325819 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:27.325884 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:29.825256 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:31.826538 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:34.324997 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:36.326016 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:38.825101 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:40.825164 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:43.325186 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:45.825830 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:48.325148 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:50.325324 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:52.825144 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:54.825386 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:57.325511 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:59.825432 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:01.826019 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:04.324951 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:06.327813 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:08.825548 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:10.825618 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:13.325998 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:15.824909 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:18.325253 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:20.325659 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:22.825615 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:25.324569 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:27.324668 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:29.325114 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:31.824591 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:33.825417 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:36.325425 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:38.326595 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:40.825370 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:43.325332 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:45.825470 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:48.325279 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:50.825752 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:53.326233 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:55.327674 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:57.824868 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:59.825796 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:02.325316 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:04.325859 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:06.825325 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:09.325718 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:11.825001 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:14.324938 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:16.325124 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:18.325501 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:20.825364 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:22.827208 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:25.325469 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:27.825982 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:30.325432 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:32.325551 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:34.825047 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:36.825526 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:39.325753 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:41.825898 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:44.325151 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:46.325219 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:48.325661 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:50.826115 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:53.325524 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:55.825672 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:57.825995 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:00.325672 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:02.824695 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:04.825548 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:07.325274 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:09.325798 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:11.824561 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:13.825167 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:15.825328 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:18.324814 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:20.824710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:22.825668 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:25.325111 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:27.824859 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:29.825200 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:32.328676 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:34.825710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:36.826122 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:39.324220 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:41.324710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:43.325287 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:45.325431 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:47.824648 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:49.825286 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:51.825539 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:53.825772 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:55.826486 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:58.324721 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:00.325134 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:02.825138 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:05.324759 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:05.324796 2195552 node_ready.go:38] duration metric: took 4m0.003559137s for node "flannel-708138" to be "Ready" ...
	I0120 16:47:05.327110 2195552 out.go:201] 
	W0120 16:47:05.328484 2195552 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0120 16:47:05.328509 2195552 out.go:270] * 
	* 
	W0120 16:47:05.329391 2195552 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:47:05.331128 2195552 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (291.44s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (326.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:49:45.663525 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:49:52.135024 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:50:03.871910 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:50:19.839938 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:51:25.794003 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:51:41.343305 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:51:46.007408 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:52:13.708187 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/enable-default-cni-708138/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:52:14.249558 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:52:50.816010 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:53:04.172856 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:53:07.247463 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/kindnet-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:53:41.933289 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:54:09.636228 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:54:13.879163 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:54:27.236851 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:54:45.664053 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
E0120 16:54:52.134915 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/custom-flannel-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.241:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.241:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 2 (242.174664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-806597" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
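A minimal manual reproduction of this check, assuming the old-k8s-version-806597 profile and kubeconfig context named in the log above are still present on the host, would be:

	out/minikube-linux-amd64 status -p old-k8s-version-806597
	kubectl --context old-k8s-version-806597 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide

Given the state recorded in this run, the first command reports the apiserver as Stopped, and the second would hit the same "connection refused" error on 192.168.50.241:8443 that the polling warnings above show.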
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-806597 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-806597 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.098µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-806597 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
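The image assertion compares the dashboard-metrics-scraper deployment's container image against registry.k8s.io/echoserver:1.4; a sketch of the equivalent manual query, usable only once the apiserver is reachable again (the jsonpath expression is illustrative, not part of the test code), is:

	kubectl --context old-k8s-version-806597 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

In this run the "Addon deployment info" above is empty because the describe call had already failed with a context deadline, so the assertion had nothing to match against.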
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 2 (234.750338ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-806597 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo docker                        | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo cat                           | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo                               | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo find                          | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p flannel-708138 sudo crio                          | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p flannel-708138                                    | flannel-708138 | jenkins | v1.35.0 | 20 Jan 25 16:47 UTC | 20 Jan 25 16:47 UTC |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:42:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:42:32.008473 2197206 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:42:32.008621 2197206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:42:32.008635 2197206 out.go:358] Setting ErrFile to fd 2...
	I0120 16:42:32.008642 2197206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:42:32.008834 2197206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:42:32.009438 2197206 out.go:352] Setting JSON to false
	I0120 16:42:32.010574 2197206 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":30298,"bootTime":1737361054,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:42:32.010728 2197206 start.go:139] virtualization: kvm guest
	I0120 16:42:32.013230 2197206 out.go:177] * [bridge-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:42:32.014892 2197206 notify.go:220] Checking for updates...
	I0120 16:42:32.014906 2197206 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:42:32.016448 2197206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:42:32.017869 2197206 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:42:32.019315 2197206 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:32.020696 2197206 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:42:32.022005 2197206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:42:32.023905 2197206 config.go:182] Loaded profile config "embed-certs-429406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:32.024041 2197206 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:32.024168 2197206 config.go:182] Loaded profile config "old-k8s-version-806597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 16:42:32.024283 2197206 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:42:32.065664 2197206 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:42:32.067124 2197206 start.go:297] selected driver: kvm2
	I0120 16:42:32.067147 2197206 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:42:32.067160 2197206 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:42:32.067963 2197206 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:42:32.068068 2197206 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 16:42:32.087530 2197206 install.go:137] /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0120 16:42:32.087602 2197206 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:42:32.087872 2197206 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:42:32.087908 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:42:32.087916 2197206 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 16:42:32.087987 2197206 start.go:340] cluster config:
	{Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:42:32.088138 2197206 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 16:42:32.090276 2197206 out.go:177] * Starting "bridge-708138" primary control-plane node in "bridge-708138" cluster
	I0120 16:42:30.420362 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:30.421027 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:30.421059 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:30.420994 2195575 retry.go:31] will retry after 3.907613054s: waiting for domain to come up
	I0120 16:42:32.091652 2197206 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:42:32.091722 2197206 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 16:42:32.091737 2197206 cache.go:56] Caching tarball of preloaded images
	I0120 16:42:32.091846 2197206 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 16:42:32.091859 2197206 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 16:42:32.091963 2197206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json ...
	I0120 16:42:32.091983 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json: {Name:mk67d90943d59835916cc1f1dddad0547daa252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:32.092126 2197206 start.go:360] acquireMachinesLock for bridge-708138: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 16:42:34.330849 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:34.331412 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find current IP address of domain flannel-708138 in network mk-flannel-708138
	I0120 16:42:34.331455 2195552 main.go:141] libmachine: (flannel-708138) DBG | I0120 16:42:34.331358 2195575 retry.go:31] will retry after 5.584556774s: waiting for domain to come up
	I0120 16:42:41.479851 2197206 start.go:364] duration metric: took 9.387696864s to acquireMachinesLock for "bridge-708138"
	I0120 16:42:41.479942 2197206 start.go:93] Provisioning new machine with config: &{Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:42:41.480071 2197206 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 16:42:41.482328 2197206 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 16:42:41.482654 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:42:41.482727 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:42:41.499933 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0120 16:42:41.500357 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:42:41.500878 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:42:41.500905 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:42:41.501247 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:42:41.501477 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:42:41.501622 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:42:41.501777 2197206 start.go:159] libmachine.API.Create for "bridge-708138" (driver="kvm2")
	I0120 16:42:41.501811 2197206 client.go:168] LocalClient.Create starting
	I0120 16:42:41.501865 2197206 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
	I0120 16:42:41.501911 2197206 main.go:141] libmachine: Decoding PEM data...
	I0120 16:42:41.501942 2197206 main.go:141] libmachine: Parsing certificate...
	I0120 16:42:41.502018 2197206 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
	I0120 16:42:41.502048 2197206 main.go:141] libmachine: Decoding PEM data...
	I0120 16:42:41.502079 2197206 main.go:141] libmachine: Parsing certificate...
	I0120 16:42:41.502119 2197206 main.go:141] libmachine: Running pre-create checks...
	I0120 16:42:41.502134 2197206 main.go:141] libmachine: (bridge-708138) Calling .PreCreateCheck
	I0120 16:42:41.502482 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:42:41.503075 2197206 main.go:141] libmachine: Creating machine...
	I0120 16:42:41.503098 2197206 main.go:141] libmachine: (bridge-708138) Calling .Create
	I0120 16:42:41.503237 2197206 main.go:141] libmachine: (bridge-708138) creating KVM machine...
	I0120 16:42:41.503270 2197206 main.go:141] libmachine: (bridge-708138) creating network...
	I0120 16:42:41.504580 2197206 main.go:141] libmachine: (bridge-708138) DBG | found existing default KVM network
	I0120 16:42:41.506204 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.505980 2197289 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:9b:e5} reservation:<nil>}
	I0120 16:42:41.507221 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.507124 2197289 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:0e:01} reservation:<nil>}
	I0120 16:42:41.508246 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.508159 2197289 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:2c:a8} reservation:<nil>}
	I0120 16:42:41.509727 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.509645 2197289 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a19c0}
	I0120 16:42:41.509792 2197206 main.go:141] libmachine: (bridge-708138) DBG | created network xml: 
	I0120 16:42:41.509817 2197206 main.go:141] libmachine: (bridge-708138) DBG | <network>
	I0120 16:42:41.509828 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <name>mk-bridge-708138</name>
	I0120 16:42:41.509848 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <dns enable='no'/>
	I0120 16:42:41.509881 2197206 main.go:141] libmachine: (bridge-708138) DBG |   
	I0120 16:42:41.509906 2197206 main.go:141] libmachine: (bridge-708138) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0120 16:42:41.509920 2197206 main.go:141] libmachine: (bridge-708138) DBG |     <dhcp>
	I0120 16:42:41.509931 2197206 main.go:141] libmachine: (bridge-708138) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0120 16:42:41.509939 2197206 main.go:141] libmachine: (bridge-708138) DBG |     </dhcp>
	I0120 16:42:41.509943 2197206 main.go:141] libmachine: (bridge-708138) DBG |   </ip>
	I0120 16:42:41.509948 2197206 main.go:141] libmachine: (bridge-708138) DBG |   
	I0120 16:42:41.509953 2197206 main.go:141] libmachine: (bridge-708138) DBG | </network>
	I0120 16:42:41.509966 2197206 main.go:141] libmachine: (bridge-708138) DBG | 
	I0120 16:42:41.515816 2197206 main.go:141] libmachine: (bridge-708138) DBG | trying to create private KVM network mk-bridge-708138 192.168.72.0/24...
	I0120 16:42:41.591057 2197206 main.go:141] libmachine: (bridge-708138) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 ...
	I0120 16:42:41.591103 2197206 main.go:141] libmachine: (bridge-708138) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 16:42:41.591115 2197206 main.go:141] libmachine: (bridge-708138) DBG | private KVM network mk-bridge-708138 192.168.72.0/24 created
	I0120 16:42:41.591137 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.590985 2197289 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:41.591176 2197206 main.go:141] libmachine: (bridge-708138) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 16:42:41.878512 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:41.878362 2197289 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa...
	I0120 16:42:39.917690 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:39.918271 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has current primary IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:39.918301 2195552 main.go:141] libmachine: (flannel-708138) found domain IP: 192.168.39.206
	I0120 16:42:39.918314 2195552 main.go:141] libmachine: (flannel-708138) reserving static IP address...
	I0120 16:42:39.918709 2195552 main.go:141] libmachine: (flannel-708138) DBG | unable to find host DHCP lease matching {name: "flannel-708138", mac: "52:54:00:ff:a2:3d", ip: "192.168.39.206"} in network mk-flannel-708138
	I0120 16:42:40.002772 2195552 main.go:141] libmachine: (flannel-708138) DBG | Getting to WaitForSSH function...
	I0120 16:42:40.002812 2195552 main.go:141] libmachine: (flannel-708138) reserved static IP address 192.168.39.206 for domain flannel-708138
	I0120 16:42:40.002826 2195552 main.go:141] libmachine: (flannel-708138) waiting for SSH...
	I0120 16:42:40.005462 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.005818 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.005841 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.006030 2195552 main.go:141] libmachine: (flannel-708138) DBG | Using SSH client type: external
	I0120 16:42:40.006070 2195552 main.go:141] libmachine: (flannel-708138) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa (-rw-------)
	I0120 16:42:40.006114 2195552 main.go:141] libmachine: (flannel-708138) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:42:40.006136 2195552 main.go:141] libmachine: (flannel-708138) DBG | About to run SSH command:
	I0120 16:42:40.006152 2195552 main.go:141] libmachine: (flannel-708138) DBG | exit 0
	I0120 16:42:40.135269 2195552 main.go:141] libmachine: (flannel-708138) DBG | SSH cmd err, output: <nil>: 
	I0120 16:42:40.135526 2195552 main.go:141] libmachine: (flannel-708138) KVM machine creation complete
	I0120 16:42:40.135876 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:40.136615 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:40.136828 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:40.137011 2195552 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:42:40.137029 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:42:40.138406 2195552 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:42:40.138423 2195552 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:42:40.138452 2195552 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:42:40.138464 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.140844 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.141163 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.141205 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.141321 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.141497 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.141697 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.141855 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.142022 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.142224 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.142236 2195552 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:42:40.250660 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:42:40.250692 2195552 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:42:40.250703 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.253520 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.253863 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.253919 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.254020 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.254263 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.254462 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.254593 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.254769 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.254954 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.254966 2195552 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:42:40.371879 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:42:40.371990 2195552 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:42:40.372011 2195552 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:42:40.372023 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.372291 2195552 buildroot.go:166] provisioning hostname "flannel-708138"
	I0120 16:42:40.372320 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.372554 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.375287 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.375686 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.375717 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.375925 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.376151 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.376353 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.376496 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.376659 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.376836 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.376848 2195552 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-708138 && echo "flannel-708138" | sudo tee /etc/hostname
	I0120 16:42:40.501787 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-708138
	
	I0120 16:42:40.501820 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.504836 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.505242 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.505267 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.505435 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.505652 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.505809 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.505915 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.506087 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.506277 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.506293 2195552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-708138' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-708138/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-708138' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:42:40.628479 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:42:40.628514 2195552 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:42:40.628580 2195552 buildroot.go:174] setting up certificates
	I0120 16:42:40.628599 2195552 provision.go:84] configureAuth start
	I0120 16:42:40.628618 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetMachineName
	I0120 16:42:40.628897 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:40.631696 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.632058 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.632103 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.632242 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.634596 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.634957 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.634983 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.635147 2195552 provision.go:143] copyHostCerts
	I0120 16:42:40.635203 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:42:40.635213 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:42:40.635282 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:42:40.635416 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:42:40.635427 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:42:40.635466 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:42:40.635533 2195552 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:42:40.635540 2195552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:42:40.635560 2195552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:42:40.635622 2195552 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.flannel-708138 san=[127.0.0.1 192.168.39.206 flannel-708138 localhost minikube]
	I0120 16:42:40.788476 2195552 provision.go:177] copyRemoteCerts
	I0120 16:42:40.788537 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:42:40.788565 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.791448 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.791862 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.791889 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.792091 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.792295 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.792425 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.792541 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:40.877555 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:42:40.904115 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0120 16:42:40.933842 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:42:40.962366 2195552 provision.go:87] duration metric: took 333.749236ms to configureAuth
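The configureAuth step above generates a server certificate for the new machine, signed by the host-side CA under .minikube/certs, with SANs for 127.0.0.1, 192.168.39.206, flannel-708138, localhost and minikube, and then copies the resulting PEM files into /etc/docker on the guest. Below is a minimal standalone sketch of that certificate step using Go's crypto/x509; the file names and the issueServerCert helper are illustrative, not minikube's actual provision code, and the CA key is assumed to be an RSA PKCS#1 PEM.

    // Sketch only: issue a server certificate signed by an existing CA, with the
    // SANs seen in the log above. File names (ca.pem, ca-key.pem, server.pem,
    // server-key.pem) are placeholders, not minikube's real paths on this host.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"errors"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func issueServerCert(caCertPEM, caKeyPEM []byte) (certPEM, keyPEM []byte, err error) {
    	caBlock, _ := pem.Decode(caCertPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	if caBlock == nil || keyBlock == nil {
    		return nil, nil, errors.New("could not decode CA cert/key PEM")
    	}
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		return nil, nil, err
    	}
    	// Assumption: the CA key is an RSA key in PKCS#1 ("RSA PRIVATE KEY") form.
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	if err != nil {
    		return nil, nil, err
    	}
    	// Fresh key pair for the server certificate.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-708138"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the provision log: IPs plus hostnames.
    		DNSNames:    []string{"flannel-708138", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.206")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }

    func main() {
    	caCert, _ := os.ReadFile("ca.pem")
    	caKey, _ := os.ReadFile("ca-key.pem")
    	cert, key, err := issueServerCert(caCert, caKey)
    	if err != nil {
    		panic(err)
    	}
    	_ = os.WriteFile("server.pem", cert, 0o644)
    	_ = os.WriteFile("server-key.pem", key, 0o600)
    }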
	I0120 16:42:40.962401 2195552 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:42:40.962639 2195552 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:42:40.962740 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:40.965753 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.966102 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:40.966137 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:40.966346 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:40.966578 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.966794 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:40.966936 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:40.967135 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:40.967319 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:40.967333 2195552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:42:41.219615 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:42:41.219649 2195552 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:42:41.219660 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetURL
	I0120 16:42:41.220953 2195552 main.go:141] libmachine: (flannel-708138) DBG | using libvirt version 6000000
	I0120 16:42:41.223183 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.223607 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.223639 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.223729 2195552 main.go:141] libmachine: Docker is up and running!
	I0120 16:42:41.223743 2195552 main.go:141] libmachine: Reticulating splines...
	I0120 16:42:41.223752 2195552 client.go:171] duration metric: took 27.127384878s to LocalClient.Create
	I0120 16:42:41.223781 2195552 start.go:167] duration metric: took 27.127453023s to libmachine.API.Create "flannel-708138"
	I0120 16:42:41.223794 2195552 start.go:293] postStartSetup for "flannel-708138" (driver="kvm2")
	I0120 16:42:41.223803 2195552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:42:41.223831 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.224099 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:42:41.224137 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.226284 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.226568 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.226594 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.226810 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.226999 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.227158 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.227283 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.313516 2195552 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:42:41.318553 2195552 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:42:41.318588 2195552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:42:41.318691 2195552 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:42:41.318822 2195552 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:42:41.318966 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:42:41.329039 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:42:41.359288 2195552 start.go:296] duration metric: took 135.474673ms for postStartSetup
	I0120 16:42:41.359376 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetConfigRaw
	I0120 16:42:41.360116 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:41.363418 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.363768 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.363797 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.364037 2195552 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/config.json ...
	I0120 16:42:41.364306 2195552 start.go:128] duration metric: took 27.289215285s to createHost
	I0120 16:42:41.364339 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.366928 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.367308 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.367345 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.367538 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.367729 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.367894 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.367999 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.368153 2195552 main.go:141] libmachine: Using SSH client type: native
	I0120 16:42:41.368324 2195552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0120 16:42:41.368333 2195552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:42:41.479683 2195552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737391361.443756218
	
	I0120 16:42:41.479715 2195552 fix.go:216] guest clock: 1737391361.443756218
	I0120 16:42:41.479725 2195552 fix.go:229] Guest: 2025-01-20 16:42:41.443756218 +0000 UTC Remote: 2025-01-20 16:42:41.364324183 +0000 UTC m=+27.417363622 (delta=79.432035ms)
	I0120 16:42:41.479753 2195552 fix.go:200] guest clock delta is within tolerance: 79.432035ms
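The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it to the host clock, and accept the machine because the 79ms delta is within tolerance. The following is a self-contained sketch of that comparison; the runGuest helper runs the command locally so the example is runnable, and the 2s tolerance is an assumption for illustration rather than minikube's actual threshold.

    // Sketch of a guest-vs-host clock skew check. In the real flow the command
    // runs on the guest over SSH; here it runs locally so the example compiles
    // and runs on its own.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    // runGuest stands in for "run this command on the guest".
    func runGuest(cmd string) (string, error) {
    	out, err := exec.Command("sh", "-c", cmd).Output()
    	return string(out), err
    }

    // checkGuestClock reads seconds.nanoseconds from the guest clock and compares
    // it to the local clock. float64 keeps roughly sub-microsecond precision here,
    // which is plenty for a millisecond-scale tolerance.
    func checkGuestClock(tolerance time.Duration) (time.Duration, bool, error) {
    	out, err := runGuest("date +%s.%N")
    	if err != nil {
    		return 0, false, err
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return 0, false, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance, nil
    }

    func main() {
    	delta, ok, err := checkGuestClock(2 * time.Second)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, ok)
    }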
	I0120 16:42:41.479760 2195552 start.go:83] releasing machines lock for "flannel-708138", held for 27.404795771s
	I0120 16:42:41.479795 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.480084 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:41.483114 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.483496 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.483519 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.483702 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484306 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484533 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:42:41.484636 2195552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:42:41.484681 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.484751 2195552 ssh_runner.go:195] Run: cat /version.json
	I0120 16:42:41.484776 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:42:41.487833 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.487927 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488372 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.488399 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488422 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:41.488436 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:41.488512 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.488602 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:42:41.488694 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.488757 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:42:41.488853 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.488899 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:42:41.489003 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.489094 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:42:41.599954 2195552 ssh_runner.go:195] Run: systemctl --version
	I0120 16:42:41.607089 2195552 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:42:41.776515 2195552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:42:41.783949 2195552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:42:41.784065 2195552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:42:41.801321 2195552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:42:41.801352 2195552 start.go:495] detecting cgroup driver to use...
	I0120 16:42:41.801424 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:42:41.819201 2195552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:42:41.834731 2195552 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:42:41.834824 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:42:41.850093 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:42:41.865030 2195552 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:42:41.992116 2195552 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:42:42.163387 2195552 docker.go:233] disabling docker service ...
	I0120 16:42:42.163482 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:42:42.179064 2195552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:42:42.194832 2195552 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:42:42.325738 2195552 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:42:42.463211 2195552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:42:42.478104 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:42:42.498097 2195552 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:42:42.498191 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.510081 2195552 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:42:42.510166 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.523170 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.535401 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.550805 2195552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:42:42.563405 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.575131 2195552 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.594402 2195552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:42:42.606285 2195552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:42:42.616785 2195552 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:42:42.616863 2195552 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:42:42.631836 2195552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:42:42.643068 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:42:42.774308 2195552 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:42:42.883190 2195552 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:42:42.883286 2195552 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:42:42.889890 2195552 start.go:563] Will wait 60s for crictl version
	I0120 16:42:42.889963 2195552 ssh_runner.go:195] Run: which crictl
	I0120 16:42:42.895340 2195552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:42:42.953318 2195552 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:42:42.953426 2195552 ssh_runner.go:195] Run: crio --version
	I0120 16:42:42.988671 2195552 ssh_runner.go:195] Run: crio --version
	I0120 16:42:43.023504 2195552 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:42:43.024796 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetIP
	I0120 16:42:43.030238 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:43.030849 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:42:43.030886 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:42:43.031145 2195552 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 16:42:43.036477 2195552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:42:43.051619 2195552 kubeadm.go:883] updating cluster {Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:42:43.051797 2195552 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:42:43.051875 2195552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:42:43.095932 2195552 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:42:43.096025 2195552 ssh_runner.go:195] Run: which lz4
	I0120 16:42:43.101037 2195552 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:42:43.106099 2195552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:42:43.106139 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:42:42.022498 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:42.022333 2197289 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/bridge-708138.rawdisk...
	I0120 16:42:42.022537 2197206 main.go:141] libmachine: (bridge-708138) DBG | Writing magic tar header
	I0120 16:42:42.022550 2197206 main.go:141] libmachine: (bridge-708138) DBG | Writing SSH key tar header
	I0120 16:42:42.022558 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:42.022472 2197289 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 ...
	I0120 16:42:42.022576 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138
	I0120 16:42:42.022676 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138 (perms=drwx------)
	I0120 16:42:42.022704 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
	I0120 16:42:42.022716 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
	I0120 16:42:42.022728 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:42:42.022745 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
	I0120 16:42:42.022762 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 16:42:42.022771 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home/jenkins
	I0120 16:42:42.022780 2197206 main.go:141] libmachine: (bridge-708138) DBG | checking permissions on dir: /home
	I0120 16:42:42.022821 2197206 main.go:141] libmachine: (bridge-708138) DBG | skipping /home - not owner
	I0120 16:42:42.022845 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
	I0120 16:42:42.022858 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
	I0120 16:42:42.022869 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 16:42:42.022883 2197206 main.go:141] libmachine: (bridge-708138) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 16:42:42.022898 2197206 main.go:141] libmachine: (bridge-708138) creating domain...
	I0120 16:42:42.024254 2197206 main.go:141] libmachine: (bridge-708138) define libvirt domain using xml: 
	I0120 16:42:42.024299 2197206 main.go:141] libmachine: (bridge-708138) <domain type='kvm'>
	I0120 16:42:42.024309 2197206 main.go:141] libmachine: (bridge-708138)   <name>bridge-708138</name>
	I0120 16:42:42.024317 2197206 main.go:141] libmachine: (bridge-708138)   <memory unit='MiB'>3072</memory>
	I0120 16:42:42.024329 2197206 main.go:141] libmachine: (bridge-708138)   <vcpu>2</vcpu>
	I0120 16:42:42.024341 2197206 main.go:141] libmachine: (bridge-708138)   <features>
	I0120 16:42:42.024352 2197206 main.go:141] libmachine: (bridge-708138)     <acpi/>
	I0120 16:42:42.024360 2197206 main.go:141] libmachine: (bridge-708138)     <apic/>
	I0120 16:42:42.024370 2197206 main.go:141] libmachine: (bridge-708138)     <pae/>
	I0120 16:42:42.024375 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024382 2197206 main.go:141] libmachine: (bridge-708138)   </features>
	I0120 16:42:42.024395 2197206 main.go:141] libmachine: (bridge-708138)   <cpu mode='host-passthrough'>
	I0120 16:42:42.024433 2197206 main.go:141] libmachine: (bridge-708138)   
	I0120 16:42:42.024460 2197206 main.go:141] libmachine: (bridge-708138)   </cpu>
	I0120 16:42:42.024482 2197206 main.go:141] libmachine: (bridge-708138)   <os>
	I0120 16:42:42.024498 2197206 main.go:141] libmachine: (bridge-708138)     <type>hvm</type>
	I0120 16:42:42.024508 2197206 main.go:141] libmachine: (bridge-708138)     <boot dev='cdrom'/>
	I0120 16:42:42.024514 2197206 main.go:141] libmachine: (bridge-708138)     <boot dev='hd'/>
	I0120 16:42:42.024522 2197206 main.go:141] libmachine: (bridge-708138)     <bootmenu enable='no'/>
	I0120 16:42:42.024526 2197206 main.go:141] libmachine: (bridge-708138)   </os>
	I0120 16:42:42.024533 2197206 main.go:141] libmachine: (bridge-708138)   <devices>
	I0120 16:42:42.024544 2197206 main.go:141] libmachine: (bridge-708138)     <disk type='file' device='cdrom'>
	I0120 16:42:42.024558 2197206 main.go:141] libmachine: (bridge-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/boot2docker.iso'/>
	I0120 16:42:42.024574 2197206 main.go:141] libmachine: (bridge-708138)       <target dev='hdc' bus='scsi'/>
	I0120 16:42:42.024583 2197206 main.go:141] libmachine: (bridge-708138)       <readonly/>
	I0120 16:42:42.024604 2197206 main.go:141] libmachine: (bridge-708138)     </disk>
	I0120 16:42:42.024617 2197206 main.go:141] libmachine: (bridge-708138)     <disk type='file' device='disk'>
	I0120 16:42:42.024629 2197206 main.go:141] libmachine: (bridge-708138)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 16:42:42.024646 2197206 main.go:141] libmachine: (bridge-708138)       <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/bridge-708138.rawdisk'/>
	I0120 16:42:42.024661 2197206 main.go:141] libmachine: (bridge-708138)       <target dev='hda' bus='virtio'/>
	I0120 16:42:42.024672 2197206 main.go:141] libmachine: (bridge-708138)     </disk>
	I0120 16:42:42.024682 2197206 main.go:141] libmachine: (bridge-708138)     <interface type='network'>
	I0120 16:42:42.024691 2197206 main.go:141] libmachine: (bridge-708138)       <source network='mk-bridge-708138'/>
	I0120 16:42:42.024701 2197206 main.go:141] libmachine: (bridge-708138)       <model type='virtio'/>
	I0120 16:42:42.024709 2197206 main.go:141] libmachine: (bridge-708138)     </interface>
	I0120 16:42:42.024723 2197206 main.go:141] libmachine: (bridge-708138)     <interface type='network'>
	I0120 16:42:42.024747 2197206 main.go:141] libmachine: (bridge-708138)       <source network='default'/>
	I0120 16:42:42.024765 2197206 main.go:141] libmachine: (bridge-708138)       <model type='virtio'/>
	I0120 16:42:42.024776 2197206 main.go:141] libmachine: (bridge-708138)     </interface>
	I0120 16:42:42.024786 2197206 main.go:141] libmachine: (bridge-708138)     <serial type='pty'>
	I0120 16:42:42.024791 2197206 main.go:141] libmachine: (bridge-708138)       <target port='0'/>
	I0120 16:42:42.024796 2197206 main.go:141] libmachine: (bridge-708138)     </serial>
	I0120 16:42:42.024802 2197206 main.go:141] libmachine: (bridge-708138)     <console type='pty'>
	I0120 16:42:42.024807 2197206 main.go:141] libmachine: (bridge-708138)       <target type='serial' port='0'/>
	I0120 16:42:42.024814 2197206 main.go:141] libmachine: (bridge-708138)     </console>
	I0120 16:42:42.024823 2197206 main.go:141] libmachine: (bridge-708138)     <rng model='virtio'>
	I0120 16:42:42.024843 2197206 main.go:141] libmachine: (bridge-708138)       <backend model='random'>/dev/random</backend>
	I0120 16:42:42.024857 2197206 main.go:141] libmachine: (bridge-708138)     </rng>
	I0120 16:42:42.024871 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024886 2197206 main.go:141] libmachine: (bridge-708138)     
	I0120 16:42:42.024898 2197206 main.go:141] libmachine: (bridge-708138)   </devices>
	I0120 16:42:42.024905 2197206 main.go:141] libmachine: (bridge-708138) </domain>
	I0120 16:42:42.024917 2197206 main.go:141] libmachine: (bridge-708138) 
	I0120 16:42:42.029557 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:92:a4:fd in network default
	I0120 16:42:42.030218 2197206 main.go:141] libmachine: (bridge-708138) starting domain...
	I0120 16:42:42.030248 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:42.030257 2197206 main.go:141] libmachine: (bridge-708138) ensuring networks are active...
	I0120 16:42:42.031044 2197206 main.go:141] libmachine: (bridge-708138) Ensuring network default is active
	I0120 16:42:42.031601 2197206 main.go:141] libmachine: (bridge-708138) Ensuring network mk-bridge-708138 is active
	I0120 16:42:42.032382 2197206 main.go:141] libmachine: (bridge-708138) getting domain XML...
	I0120 16:42:42.033582 2197206 main.go:141] libmachine: (bridge-708138) creating domain...
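At this point the driver has written the domain XML shown above, registered it with libvirt, and started it. The kvm2 driver does this through the libvirt API; a rough command-line equivalent shelled out from Go with virsh would look like the sketch below (the bridge-708138.xml path is a placeholder for wherever the XML was saved).

    // Hypothetical virsh equivalent of the define/start sequence in the log.
    // The real driver talks to libvirt directly; this is only an illustration.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
    	}
    	return nil
    }

    func main() {
    	// Register the domain from the XML written by the driver, then boot it.
    	if err := run("virsh", "define", "bridge-708138.xml"); err != nil {
    		panic(err)
    	}
    	if err := run("virsh", "start", "bridge-708138"); err != nil {
    		panic(err)
    	}
    }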
	I0120 16:42:43.399268 2197206 main.go:141] libmachine: (bridge-708138) waiting for IP...
	I0120 16:42:43.400313 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.400849 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.400943 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.400854 2197289 retry.go:31] will retry after 255.464218ms: waiting for domain to come up
	I0120 16:42:43.658464 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.659186 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.659219 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.659154 2197289 retry.go:31] will retry after 266.392686ms: waiting for domain to come up
	I0120 16:42:43.928079 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:43.928991 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:43.929026 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:43.928961 2197289 retry.go:31] will retry after 451.40279ms: waiting for domain to come up
	I0120 16:42:44.382040 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:44.382828 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:44.382874 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:44.382787 2197289 retry.go:31] will retry after 443.359812ms: waiting for domain to come up
	I0120 16:42:44.827744 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:44.828300 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:44.828402 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:44.828290 2197289 retry.go:31] will retry after 735.012761ms: waiting for domain to come up
	I0120 16:42:45.565132 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:45.565770 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:45.565798 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:45.565735 2197289 retry.go:31] will retry after 744.342493ms: waiting for domain to come up
	I0120 16:42:46.311596 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:46.312274 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:46.312307 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:46.312254 2197289 retry.go:31] will retry after 1.044734911s: waiting for domain to come up
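The driver then waits for the new domain to pick up a DHCP lease in mk-bridge-708138, retrying with a growing delay (255ms, 266ms, 451ms, ... in the log) until MAC 52:54:00:d9:89:1c shows up. Below is a minimal polling loop in the same spirit, using the virsh net-dhcp-leases command instead of the libvirt API; the backoff values and lease-parsing heuristic are illustrative only.

    // Sketch: poll a libvirt network's DHCP leases until a MAC appears, backing
    // off between attempts. minikube's kvm2 driver reads leases via the libvirt
    // API rather than virsh; this is a stand-in for illustration.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForLeaseIP(network, mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
    		if err == nil {
    			for _, line := range strings.Split(string(out), "\n") {
    				if !strings.Contains(line, mac) {
    					continue
    				}
    				// A lease line contains an "a.b.c.d/prefix" column; return the IP part.
    				for _, f := range strings.Fields(line) {
    					if strings.Contains(f, "/") && strings.Count(f, ".") == 3 {
    						return strings.SplitN(f, "/", 2)[0], nil
    					}
    				}
    			}
    		}
    		time.Sleep(backoff)
    		if backoff < 2*time.Second {
    			backoff *= 2
    		}
    	}
    	return "", fmt.Errorf("no DHCP lease for %s in network %s after %s", mac, network, timeout)
    }

    func main() {
    	ip, err := waitForLeaseIP("mk-bridge-708138", "52:54:00:d9:89:1c", 2*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("domain IP:", ip)
    }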
	I0120 16:42:44.760474 2195552 crio.go:462] duration metric: took 1.659486395s to copy over tarball
	I0120 16:42:44.760562 2195552 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:42:47.285354 2195552 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.524736784s)
	I0120 16:42:47.285446 2195552 crio.go:469] duration metric: took 2.524929922s to extract the tarball
	I0120 16:42:47.285471 2195552 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:42:47.324858 2195552 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:42:47.372415 2195552 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:42:47.372446 2195552 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:42:47.372457 2195552 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.32.0 crio true true} ...
	I0120 16:42:47.372643 2195552 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-708138 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0120 16:42:47.372722 2195552 ssh_runner.go:195] Run: crio config
	I0120 16:42:47.422488 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:42:47.422519 2195552 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:42:47.422554 2195552 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-708138 NodeName:flannel-708138 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:42:47.422786 2195552 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-708138"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
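	Aside, not part of the captured log: the YAML above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new and later copies to /var/tmp/minikube/kubeadm.yaml. A minimal sketch of exercising that config by hand on the node, assuming the same binaries path the test uses further below, would be:
	
	  # illustrative only; the test itself runs the full `kubeadm init` shown later in this log
	  sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	
	The --dry-run flag makes kubeadm print what it would do without changing the node, which is useful when a generated config like this one is suspected of being malformed.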
	I0120 16:42:47.422890 2195552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:42:47.433846 2195552 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:42:47.433938 2195552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:42:47.444578 2195552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0120 16:42:47.461856 2195552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:42:47.478765 2195552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0120 16:42:47.495925 2195552 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0120 16:42:47.500231 2195552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:42:47.513503 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:42:47.646909 2195552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:42:47.666731 2195552 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138 for IP: 192.168.39.206
	I0120 16:42:47.666760 2195552 certs.go:194] generating shared ca certs ...
	I0120 16:42:47.666784 2195552 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.666988 2195552 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:42:47.667058 2195552 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:42:47.667071 2195552 certs.go:256] generating profile certs ...
	I0120 16:42:47.667161 2195552 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key
	I0120 16:42:47.667181 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt with IP's: []
	I0120 16:42:47.957732 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt ...
	I0120 16:42:47.957764 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.crt: {Name:mk2f64b37e464c896144cdc44cfc1fc4f548c045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.957936 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key ...
	I0120 16:42:47.957947 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/client.key: {Name:mk1b16a48ea06faf15a739043d6a562a12842ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:47.958021 2195552 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76
	I0120 16:42:47.958037 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I0120 16:42:48.237739 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 ...
	I0120 16:42:48.237772 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76: {Name:mk2d82f1b438734a66d4bca5d26768f17a50dbb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.237934 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76 ...
	I0120 16:42:48.237945 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76: {Name:mk5552939933befe1ef0d3a7fff6d21fdf398d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.238016 2195552 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt.ebc3dc76 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt
	I0120 16:42:48.238119 2195552 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key.ebc3dc76 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key
	I0120 16:42:48.238183 2195552 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key
	I0120 16:42:48.238205 2195552 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt with IP's: []
	I0120 16:42:48.328536 2195552 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt ...
	I0120 16:42:48.328597 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt: {Name:mk71903f0dc1f4b5602bf3f87a72991a3294fe05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.328771 2195552 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key ...
	I0120 16:42:48.328786 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key: {Name:mkb6cb1df1b5d7b66259c1ec746be1ba174817a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:42:48.328986 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:42:48.329026 2195552 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:42:48.329038 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:42:48.329061 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:42:48.329085 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:42:48.329113 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:42:48.329155 2195552 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:42:48.329806 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:42:48.377022 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:42:48.423232 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:42:48.452106 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:42:48.484435 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:42:48.514707 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:42:48.541159 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:42:48.642490 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/flannel-708138/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:42:48.668101 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:42:48.696379 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:42:48.722994 2195552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:42:48.748145 2195552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:42:48.766358 2195552 ssh_runner.go:195] Run: openssl version
	I0120 16:42:48.773160 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:42:48.785416 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.791084 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.791158 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:42:48.797932 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:42:48.811525 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:42:48.826046 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.832200 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.832280 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:42:48.838879 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:42:48.851808 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:42:48.865253 2195552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.870647 2195552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.870724 2195552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:42:48.877010 2195552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
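	Aside, not part of the captured log: the hex names used above (b5213941.0, 3ec20f2e.0, 51391683.0) are OpenSSL subject hashes, which is how symlinks in /etc/ssl/certs are conventionally named so that OpenSSL can look up CA certificates. A minimal bash sketch of the same convention, using one of the files from this log:
	
	  # compute the subject hash OpenSSL uses for CA lookup, then create the matching symlink
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"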
	I0120 16:42:48.889902 2195552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:42:48.894559 2195552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:42:48.894640 2195552 kubeadm.go:392] StartCluster: {Name:flannel-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:42:48.894779 2195552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:42:48.894890 2195552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:42:48.940887 2195552 cri.go:89] found id: ""
	I0120 16:42:48.940984 2195552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:42:48.952531 2195552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:42:48.963786 2195552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:42:48.974250 2195552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:42:48.974278 2195552 kubeadm.go:157] found existing configuration files:
	
	I0120 16:42:48.974338 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:42:48.984449 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:42:48.984527 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:42:48.995330 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:42:49.006034 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:42:49.006104 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:42:49.017110 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:42:49.027295 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:42:49.027368 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:42:49.040812 2195552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:42:49.051290 2195552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:42:49.051377 2195552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:42:49.066485 2195552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:42:49.134741 2195552 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:42:49.134946 2195552 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:42:49.249160 2195552 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:42:49.249323 2195552 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:42:49.249481 2195552 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:42:49.263796 2195552 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:42:47.358916 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:47.359566 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:47.359596 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:47.359554 2197289 retry.go:31] will retry after 1.461778861s: waiting for domain to come up
	I0120 16:42:48.823504 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:48.824115 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:48.824147 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:48.824084 2197289 retry.go:31] will retry after 1.249679155s: waiting for domain to come up
	I0120 16:42:50.075499 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:50.076082 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:50.076116 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:50.076030 2197289 retry.go:31] will retry after 2.28026185s: waiting for domain to come up
	I0120 16:42:49.298061 2195552 out.go:235]   - Generating certificates and keys ...
	I0120 16:42:49.298271 2195552 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:42:49.298360 2195552 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:42:49.326405 2195552 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:42:49.603739 2195552 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:42:50.017706 2195552 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:42:50.212861 2195552 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:42:50.332005 2195552 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:42:50.332365 2195552 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-708138 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0120 16:42:50.576915 2195552 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:42:50.577225 2195552 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-708138 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0120 16:42:50.922540 2195552 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:42:51.148072 2195552 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:42:51.262833 2195552 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:42:51.262930 2195552 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:42:51.404906 2195552 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:42:51.648067 2195552 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:42:51.759756 2195552 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:42:51.962741 2195552 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:42:52.453700 2195552 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:42:52.456041 2195552 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:42:52.459366 2195552 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:42:52.461278 2195552 out.go:235]   - Booting up control plane ...
	I0120 16:42:52.461391 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:42:52.461507 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:42:52.461588 2195552 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:42:52.484769 2195552 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:42:52.493367 2195552 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:42:52.493452 2195552 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:42:52.663075 2195552 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:42:52.664096 2195552 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:42:52.357734 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:52.358411 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:52.358493 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:52.358391 2197289 retry.go:31] will retry after 2.232137635s: waiting for domain to come up
	I0120 16:42:54.592598 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:54.593256 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:54.593288 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:54.593159 2197289 retry.go:31] will retry after 3.499879042s: waiting for domain to come up
	I0120 16:42:54.164599 2195552 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501261507s
	I0120 16:42:54.164721 2195552 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:42:59.162803 2195552 kubeadm.go:310] [api-check] The API server is healthy after 5.001059076s
	I0120 16:42:59.182087 2195552 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:42:59.202928 2195552 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:42:59.251598 2195552 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:42:59.251870 2195552 kubeadm.go:310] [mark-control-plane] Marking the node flannel-708138 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:42:59.267327 2195552 kubeadm.go:310] [bootstrap-token] Using token: 0uevl5.w9rl7hild7q3qmvj
	I0120 16:42:59.268924 2195552 out.go:235]   - Configuring RBAC rules ...
	I0120 16:42:59.269076 2195552 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:42:59.276545 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:42:59.290974 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:42:59.296882 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:42:59.304061 2195552 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:42:59.311324 2195552 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:42:59.571703 2195552 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:42:59.999391 2195552 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:43:00.569884 2195552 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:43:00.572667 2195552 kubeadm.go:310] 
	I0120 16:43:00.572758 2195552 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:43:00.572768 2195552 kubeadm.go:310] 
	I0120 16:43:00.572931 2195552 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:43:00.572966 2195552 kubeadm.go:310] 
	I0120 16:43:00.573016 2195552 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:43:00.573090 2195552 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:43:00.573154 2195552 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:43:00.573163 2195552 kubeadm.go:310] 
	I0120 16:43:00.573251 2195552 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:43:00.573265 2195552 kubeadm.go:310] 
	I0120 16:43:00.573345 2195552 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:43:00.573378 2195552 kubeadm.go:310] 
	I0120 16:43:00.573475 2195552 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:43:00.573604 2195552 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:43:00.573697 2195552 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:43:00.573707 2195552 kubeadm.go:310] 
	I0120 16:43:00.573823 2195552 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:43:00.573923 2195552 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:43:00.573930 2195552 kubeadm.go:310] 
	I0120 16:43:00.574048 2195552 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0uevl5.w9rl7hild7q3qmvj \
	I0120 16:43:00.574201 2195552 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:43:00.574235 2195552 kubeadm.go:310] 	--control-plane 
	I0120 16:43:00.574258 2195552 kubeadm.go:310] 
	I0120 16:43:00.574400 2195552 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:43:00.574432 2195552 kubeadm.go:310] 
	I0120 16:43:00.574590 2195552 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0uevl5.w9rl7hild7q3qmvj \
	I0120 16:43:00.574795 2195552 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:43:00.575007 2195552 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:43:00.575049 2195552 cni.go:84] Creating CNI manager for "flannel"
	I0120 16:43:00.576721 2195552 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0120 16:42:58.094988 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:42:58.095844 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:42:58.095874 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:42:58.095719 2197289 retry.go:31] will retry after 4.384762232s: waiting for domain to come up
	I0120 16:43:00.577996 2195552 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0120 16:43:00.584504 2195552 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 16:43:00.584526 2195552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0120 16:43:00.610147 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 16:43:01.108354 2195552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:43:01.108472 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:01.108474 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-708138 minikube.k8s.io/updated_at=2025_01_20T16_43_01_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=flannel-708138 minikube.k8s.io/primary=true
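	Aside, not part of the captured log: one way to confirm the minikube.k8s.io labels applied by the command above, assuming a kubeconfig pointed at this cluster, is simply:
	
	  kubectl get node flannel-708138 --show-labels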
	I0120 16:43:01.153107 2195552 ops.go:34] apiserver oom_adj: -16
	I0120 16:43:01.323188 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:01.823589 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:02.324096 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:02.823844 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:03.323872 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:03.823872 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:04.323604 2195552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:04.428740 2195552 kubeadm.go:1113] duration metric: took 3.320348756s to wait for elevateKubeSystemPrivileges
	I0120 16:43:04.428788 2195552 kubeadm.go:394] duration metric: took 15.534153444s to StartCluster
	I0120 16:43:04.428816 2195552 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:04.428921 2195552 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:43:04.430989 2195552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:04.431307 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 16:43:04.431303 2195552 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:43:04.431336 2195552 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:43:04.431519 2195552 addons.go:69] Setting storage-provisioner=true in profile "flannel-708138"
	I0120 16:43:04.431529 2195552 config.go:182] Loaded profile config "flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:04.431538 2195552 addons.go:238] Setting addon storage-provisioner=true in "flannel-708138"
	I0120 16:43:04.431579 2195552 host.go:66] Checking if "flannel-708138" exists ...
	I0120 16:43:04.431586 2195552 addons.go:69] Setting default-storageclass=true in profile "flannel-708138"
	I0120 16:43:04.431621 2195552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-708138"
	I0120 16:43:04.432070 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.432112 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.432118 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.432151 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.435123 2195552 out.go:177] * Verifying Kubernetes components...
	I0120 16:43:04.436595 2195552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:04.449431 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0120 16:43:04.449469 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0120 16:43:04.450031 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.450065 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.450628 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.450657 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.450772 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.450798 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.451074 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.451199 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.451435 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.451674 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.451723 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.455136 2195552 addons.go:238] Setting addon default-storageclass=true in "flannel-708138"
	I0120 16:43:04.455176 2195552 host.go:66] Checking if "flannel-708138" exists ...
	I0120 16:43:04.455442 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.455480 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.468668 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0120 16:43:04.469232 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.469794 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.469810 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.470234 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.470456 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.471939 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I0120 16:43:04.472364 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.472464 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:43:04.472904 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.472933 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.473322 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.473822 2195552 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:04.473860 2195552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:04.474444 2195552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:43:04.475956 2195552 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:04.475976 2195552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:43:04.475998 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:43:04.479414 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.479895 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:43:04.479928 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.480056 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:43:04.480246 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:43:04.480426 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:43:04.480560 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:43:04.491228 2195552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0120 16:43:04.491682 2195552 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:04.492333 2195552 main.go:141] libmachine: Using API Version  1
	I0120 16:43:04.492364 2195552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:04.492740 2195552 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:04.492924 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetState
	I0120 16:43:04.494696 2195552 main.go:141] libmachine: (flannel-708138) Calling .DriverName
	I0120 16:43:04.494958 2195552 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:04.494975 2195552 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:43:04.494997 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHHostname
	I0120 16:43:04.497642 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.498099 2195552 main.go:141] libmachine: (flannel-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:a2:3d", ip: ""} in network mk-flannel-708138: {Iface:virbr1 ExpiryTime:2025-01-20 17:42:30 +0000 UTC Type:0 Mac:52:54:00:ff:a2:3d Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:flannel-708138 Clientid:01:52:54:00:ff:a2:3d}
	I0120 16:43:04.498131 2195552 main.go:141] libmachine: (flannel-708138) DBG | domain flannel-708138 has defined IP address 192.168.39.206 and MAC address 52:54:00:ff:a2:3d in network mk-flannel-708138
	I0120 16:43:04.498258 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHPort
	I0120 16:43:04.498486 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHKeyPath
	I0120 16:43:04.498649 2195552 main.go:141] libmachine: (flannel-708138) Calling .GetSSHUsername
	I0120 16:43:04.498811 2195552 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/flannel-708138/id_rsa Username:docker}
	I0120 16:43:04.741102 2195552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:04.741114 2195552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 16:43:04.889912 2195552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:04.966678 2195552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:05.319499 2195552 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
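	Aside, reconstructed from the sed pipeline above rather than captured verbatim: the replaced CoreDNS ConfigMap ends up with a hosts block like the following ahead of the forward plugin (the same pipeline also inserts a `log` directive before `errors`):
	
	  hosts {
	     192.168.39.1 host.minikube.internal
	     fallthrough
	  }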
	I0120 16:43:05.321208 2195552 node_ready.go:35] waiting up to 15m0s for node "flannel-708138" to be "Ready" ...
	I0120 16:43:05.578109 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578136 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.578257 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578282 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.578512 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.578539 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.578550 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.578558 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.580280 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580297 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580296 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580313 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.580323 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.580333 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.580340 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.580334 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580582 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.580586 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.580600 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.591009 2195552 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:05.591045 2195552 main.go:141] libmachine: (flannel-708138) Calling .Close
	I0120 16:43:05.591353 2195552 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:05.591368 2195552 main.go:141] libmachine: (flannel-708138) DBG | Closing plugin on server side
	I0120 16:43:05.591377 2195552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:05.593936 2195552 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 16:43:02.482109 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:02.482647 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find current IP address of domain bridge-708138 in network mk-bridge-708138
	I0120 16:43:02.482679 2197206 main.go:141] libmachine: (bridge-708138) DBG | I0120 16:43:02.482582 2197289 retry.go:31] will retry after 5.49113903s: waiting for domain to come up
	I0120 16:43:05.595175 2195552 addons.go:514] duration metric: took 1.163842267s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 16:43:05.824160 2195552 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-708138" context rescaled to 1 replicas
	I0120 16:43:07.325793 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:07.975570 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:07.976154 2197206 main.go:141] libmachine: (bridge-708138) found domain IP: 192.168.72.88
	I0120 16:43:07.976182 2197206 main.go:141] libmachine: (bridge-708138) reserving static IP address...
	I0120 16:43:07.976192 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has current primary IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:07.976560 2197206 main.go:141] libmachine: (bridge-708138) DBG | unable to find host DHCP lease matching {name: "bridge-708138", mac: "52:54:00:d9:89:1c", ip: "192.168.72.88"} in network mk-bridge-708138
	I0120 16:43:08.062745 2197206 main.go:141] libmachine: (bridge-708138) reserved static IP address 192.168.72.88 for domain bridge-708138
	I0120 16:43:08.062784 2197206 main.go:141] libmachine: (bridge-708138) DBG | Getting to WaitForSSH function...
	I0120 16:43:08.062792 2197206 main.go:141] libmachine: (bridge-708138) waiting for SSH...
	I0120 16:43:08.065921 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.066430 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.066483 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.066582 2197206 main.go:141] libmachine: (bridge-708138) DBG | Using SSH client type: external
	I0120 16:43:08.066651 2197206 main.go:141] libmachine: (bridge-708138) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa (-rw-------)
	I0120 16:43:08.066681 2197206 main.go:141] libmachine: (bridge-708138) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 16:43:08.066697 2197206 main.go:141] libmachine: (bridge-708138) DBG | About to run SSH command:
	I0120 16:43:08.066706 2197206 main.go:141] libmachine: (bridge-708138) DBG | exit 0
	I0120 16:43:08.195445 2197206 main.go:141] libmachine: (bridge-708138) DBG | SSH cmd err, output: <nil>: 
	I0120 16:43:08.195759 2197206 main.go:141] libmachine: (bridge-708138) KVM machine creation complete
	I0120 16:43:08.196070 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:43:08.196739 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:08.197017 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:08.197188 2197206 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 16:43:08.197231 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:08.198995 2197206 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 16:43:08.199011 2197206 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 16:43:08.199017 2197206 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 16:43:08.199022 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.201755 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.202123 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.202152 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.202261 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.202473 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.202647 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.202790 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.202975 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.203249 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.203266 2197206 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 16:43:08.310341 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:43:08.310368 2197206 main.go:141] libmachine: Detecting the provisioner...
	I0120 16:43:08.310376 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.313249 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.313593 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.313617 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.313753 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.313976 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.314162 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.314330 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.314548 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.314788 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.314803 2197206 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 16:43:08.424018 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 16:43:08.424146 2197206 main.go:141] libmachine: found compatible host: buildroot
	I0120 16:43:08.424160 2197206 main.go:141] libmachine: Provisioning with buildroot...
	I0120 16:43:08.424174 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.424466 2197206 buildroot.go:166] provisioning hostname "bridge-708138"
	I0120 16:43:08.424517 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.424725 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.427305 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.427686 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.427715 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.427863 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.428207 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.428411 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.428534 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.428719 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.428965 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.428985 2197206 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-708138 && echo "bridge-708138" | sudo tee /etc/hostname
	I0120 16:43:08.551195 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-708138
	
	I0120 16:43:08.551238 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.554014 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.554390 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.554423 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.554574 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.554806 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.554968 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.555124 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.555257 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.555452 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.555467 2197206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-708138' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-708138/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-708138' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 16:43:08.673244 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 16:43:08.673286 2197206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
	I0120 16:43:08.673324 2197206 buildroot.go:174] setting up certificates
	I0120 16:43:08.673340 2197206 provision.go:84] configureAuth start
	I0120 16:43:08.673357 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetMachineName
	I0120 16:43:08.673699 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:08.676632 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.676968 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.677000 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.677175 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.679290 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.679603 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.679632 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.679786 2197206 provision.go:143] copyHostCerts
	I0120 16:43:08.679847 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem, removing ...
	I0120 16:43:08.679859 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem
	I0120 16:43:08.679915 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
	I0120 16:43:08.680004 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem, removing ...
	I0120 16:43:08.680019 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem
	I0120 16:43:08.680038 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
	I0120 16:43:08.680087 2197206 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem, removing ...
	I0120 16:43:08.680094 2197206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem
	I0120 16:43:08.680113 2197206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
	I0120 16:43:08.680159 2197206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.bridge-708138 san=[127.0.0.1 192.168.72.88 bridge-708138 localhost minikube]
	I0120 16:43:08.795436 2197206 provision.go:177] copyRemoteCerts
	I0120 16:43:08.795532 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 16:43:08.795567 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.798390 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.798751 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.798784 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.798951 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.799157 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.799316 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.799470 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:08.890925 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 16:43:08.918903 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 16:43:08.946784 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 16:43:08.972830 2197206 provision.go:87] duration metric: took 299.472419ms to configureAuth
	I0120 16:43:08.972860 2197206 buildroot.go:189] setting minikube options for container-runtime
	I0120 16:43:08.973105 2197206 config.go:182] Loaded profile config "bridge-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:08.973209 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:08.976107 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.976516 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:08.976547 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:08.976758 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:08.977001 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.977195 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:08.977372 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:08.977552 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:08.977793 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:08.977818 2197206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 16:43:09.218079 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 16:43:09.218113 2197206 main.go:141] libmachine: Checking connection to Docker...
	I0120 16:43:09.218121 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetURL
	I0120 16:43:09.219440 2197206 main.go:141] libmachine: (bridge-708138) DBG | using libvirt version 6000000
	I0120 16:43:09.221519 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.221903 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.221936 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.222152 2197206 main.go:141] libmachine: Docker is up and running!
	I0120 16:43:09.222170 2197206 main.go:141] libmachine: Reticulating splines...
	I0120 16:43:09.222180 2197206 client.go:171] duration metric: took 27.720355771s to LocalClient.Create
	I0120 16:43:09.222209 2197206 start.go:167] duration metric: took 27.720430833s to libmachine.API.Create "bridge-708138"
	I0120 16:43:09.222223 2197206 start.go:293] postStartSetup for "bridge-708138" (driver="kvm2")
	I0120 16:43:09.222236 2197206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 16:43:09.222269 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.222508 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 16:43:09.222546 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.224660 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.224997 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.225028 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.225135 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.225326 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.225514 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.225714 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.311781 2197206 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 16:43:09.316438 2197206 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 16:43:09.316477 2197206 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
	I0120 16:43:09.316558 2197206 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
	I0120 16:43:09.316649 2197206 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem -> 21367492.pem in /etc/ssl/certs
	I0120 16:43:09.316749 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 16:43:09.329422 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:43:09.358995 2197206 start.go:296] duration metric: took 136.756187ms for postStartSetup
	I0120 16:43:09.359076 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetConfigRaw
	I0120 16:43:09.359720 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:09.362855 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.363228 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.363298 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.363532 2197206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/config.json ...
	I0120 16:43:09.363729 2197206 start.go:128] duration metric: took 27.883644045s to createHost
	I0120 16:43:09.363752 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.367222 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.367703 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.367728 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.367889 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.368112 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.368248 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.368376 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.368536 2197206 main.go:141] libmachine: Using SSH client type: native
	I0120 16:43:09.368750 2197206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0120 16:43:09.368769 2197206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 16:43:09.476152 2197206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737391389.460433936
	
	I0120 16:43:09.476186 2197206 fix.go:216] guest clock: 1737391389.460433936
	I0120 16:43:09.476208 2197206 fix.go:229] Guest: 2025-01-20 16:43:09.460433936 +0000 UTC Remote: 2025-01-20 16:43:09.363740668 +0000 UTC m=+37.396826539 (delta=96.693268ms)
	I0120 16:43:09.476239 2197206 fix.go:200] guest clock delta is within tolerance: 96.693268ms
	I0120 16:43:09.476250 2197206 start.go:83] releasing machines lock for "bridge-708138", held for 27.996351856s
	I0120 16:43:09.476280 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.476552 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:09.479629 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.480100 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.480130 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.480293 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.480785 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.480979 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:09.481115 2197206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 16:43:09.481163 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.481228 2197206 ssh_runner.go:195] Run: cat /version.json
	I0120 16:43:09.481255 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:09.484029 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484438 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.484465 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484487 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.484809 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.484960 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:09.485013 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.485036 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:09.485249 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:09.485266 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.485476 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:09.485524 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.485634 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:09.485801 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:09.572916 2197206 ssh_runner.go:195] Run: systemctl --version
	I0120 16:43:09.609198 2197206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 16:43:09.772783 2197206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 16:43:09.779241 2197206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 16:43:09.779347 2197206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 16:43:09.796029 2197206 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 16:43:09.796066 2197206 start.go:495] detecting cgroup driver to use...
	I0120 16:43:09.796162 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 16:43:09.813742 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 16:43:09.828707 2197206 docker.go:217] disabling cri-docker service (if available) ...
	I0120 16:43:09.828775 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 16:43:09.843309 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 16:43:09.858188 2197206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 16:43:09.984031 2197206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 16:43:10.146631 2197206 docker.go:233] disabling docker service ...
	I0120 16:43:10.146719 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 16:43:10.162952 2197206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 16:43:10.176639 2197206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 16:43:10.313460 2197206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 16:43:10.449221 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 16:43:10.464620 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 16:43:10.484192 2197206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 16:43:10.484261 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.496517 2197206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 16:43:10.496623 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.508222 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.519634 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.531216 2197206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 16:43:10.543258 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.557639 2197206 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.580753 2197206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 16:43:10.592908 2197206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 16:43:10.604469 2197206 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 16:43:10.604557 2197206 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 16:43:10.619774 2197206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 16:43:10.630917 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:10.771445 2197206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 16:43:10.858491 2197206 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 16:43:10.858594 2197206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 16:43:10.863619 2197206 start.go:563] Will wait 60s for crictl version
	I0120 16:43:10.863674 2197206 ssh_runner.go:195] Run: which crictl
	I0120 16:43:10.867761 2197206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 16:43:10.910094 2197206 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 16:43:10.910202 2197206 ssh_runner.go:195] Run: crio --version
	I0120 16:43:10.946319 2197206 ssh_runner.go:195] Run: crio --version
	I0120 16:43:10.984785 2197206 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 16:43:10.986112 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetIP
	I0120 16:43:10.989054 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:10.989473 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:10.989499 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:10.989835 2197206 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 16:43:10.994705 2197206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:43:11.009975 2197206 kubeadm.go:883] updating cluster {Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 16:43:11.010149 2197206 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 16:43:11.010226 2197206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:43:11.045673 2197206 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 16:43:11.045764 2197206 ssh_runner.go:195] Run: which lz4
	I0120 16:43:11.050364 2197206 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 16:43:11.054940 2197206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 16:43:11.054978 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 16:43:09.824714 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:11.826450 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:12.645258 2197206 crio.go:462] duration metric: took 1.594939639s to copy over tarball
	I0120 16:43:12.645365 2197206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 16:43:15.071062 2197206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.425659919s)
	I0120 16:43:15.071103 2197206 crio.go:469] duration metric: took 2.425799615s to extract the tarball
	I0120 16:43:15.071114 2197206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 16:43:15.111615 2197206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 16:43:15.156900 2197206 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 16:43:15.156926 2197206 cache_images.go:84] Images are preloaded, skipping loading
	I0120 16:43:15.156936 2197206 kubeadm.go:934] updating node { 192.168.72.88 8443 v1.32.0 crio true true} ...
	I0120 16:43:15.157067 2197206 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-708138 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0120 16:43:15.157162 2197206 ssh_runner.go:195] Run: crio config
	I0120 16:43:15.208647 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:43:15.208676 2197206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 16:43:15.208699 2197206 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.88 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-708138 NodeName:bridge-708138 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 16:43:15.208830 2197206 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-708138"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.88"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.88"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 16:43:15.208898 2197206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 16:43:15.220035 2197206 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 16:43:15.220130 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 16:43:15.230274 2197206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0120 16:43:15.250389 2197206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 16:43:15.268846 2197206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0120 16:43:15.288060 2197206 ssh_runner.go:195] Run: grep 192.168.72.88	control-plane.minikube.internal$ /etc/hosts
	I0120 16:43:15.293094 2197206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 16:43:15.307503 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:15.448214 2197206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:15.471118 2197206 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138 for IP: 192.168.72.88
	I0120 16:43:15.471147 2197206 certs.go:194] generating shared ca certs ...
	I0120 16:43:15.471165 2197206 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.471331 2197206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
	I0120 16:43:15.471386 2197206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
	I0120 16:43:15.471396 2197206 certs.go:256] generating profile certs ...
	I0120 16:43:15.471452 2197206 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key
	I0120 16:43:15.471479 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt with IP's: []
	I0120 16:43:15.891023 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt ...
	I0120 16:43:15.891061 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.crt: {Name:mk81b32ec31af688b6d4652fb2789449b6bb041c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.891285 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key ...
	I0120 16:43:15.891309 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/client.key: {Name:mk3bbf7430f7b04957959e169acea17d8973d267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:15.891454 2197206 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5
	I0120 16:43:15.891482 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.88]
	I0120 16:43:16.021148 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 ...
	I0120 16:43:16.021182 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5: {Name:mk56a312fc5ec12eb4e10626dc4fa18ded44019d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.021396 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5 ...
	I0120 16:43:16.021416 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5: {Name:mk71d4978edbd5634298d6328a82e57dfdcb21df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.021521 2197206 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt.c4d58ee5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt
	I0120 16:43:16.021621 2197206 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key.c4d58ee5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key
	I0120 16:43:16.021684 2197206 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key
	I0120 16:43:16.021701 2197206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt with IP's: []
	I0120 16:43:16.200719 2197206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt ...
	I0120 16:43:16.200752 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt: {Name:mk1b93fabdfdbe923ba4bd4bdcee8aa4ee4eb6eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.200944 2197206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key ...
	I0120 16:43:16.200964 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key: {Name:mk47f0abf782077fe358b23835f1924f393006e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:16.201182 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem (1338 bytes)
	W0120 16:43:16.201225 2197206 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749_empty.pem, impossibly tiny 0 bytes
	I0120 16:43:16.201236 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 16:43:16.201260 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
	I0120 16:43:16.201283 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
	I0120 16:43:16.201303 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
	I0120 16:43:16.201340 2197206 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem (1708 bytes)
	I0120 16:43:16.201918 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 16:43:16.237391 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 16:43:16.277743 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 16:43:16.306735 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 16:43:16.334792 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 16:43:16.363266 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 16:43:16.391982 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 16:43:16.419674 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/bridge-708138/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 16:43:16.446802 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/ssl/certs/21367492.pem --> /usr/share/ca-certificates/21367492.pem (1708 bytes)
	I0120 16:43:16.474961 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 16:43:16.503997 2197206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/2136749.pem --> /usr/share/ca-certificates/2136749.pem (1338 bytes)
	I0120 16:43:16.530572 2197206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 16:43:16.548971 2197206 ssh_runner.go:195] Run: openssl version
	I0120 16:43:16.555413 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2136749.pem && ln -fs /usr/share/ca-certificates/2136749.pem /etc/ssl/certs/2136749.pem"
	I0120 16:43:16.567053 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.571897 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 15:16 /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.571974 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2136749.pem
	I0120 16:43:16.578136 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2136749.pem /etc/ssl/certs/51391683.0"
	I0120 16:43:16.590223 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21367492.pem && ln -fs /usr/share/ca-certificates/21367492.pem /etc/ssl/certs/21367492.pem"
	I0120 16:43:16.602984 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.607971 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 15:16 /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.608083 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21367492.pem
	I0120 16:43:16.614296 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21367492.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 16:43:16.626015 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 16:43:16.639800 2197206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.645006 2197206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.645084 2197206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 16:43:16.651449 2197206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 16:43:16.663469 2197206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 16:43:16.668102 2197206 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 16:43:16.668167 2197206 kubeadm.go:392] StartCluster: {Name:bridge-708138 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-708138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:43:16.668285 2197206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 16:43:16.668340 2197206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 16:43:16.706702 2197206 cri.go:89] found id: ""
	I0120 16:43:16.706804 2197206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 16:43:16.718586 2197206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 16:43:16.729343 2197206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 16:43:16.740887 2197206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 16:43:16.740911 2197206 kubeadm.go:157] found existing configuration files:
	
	I0120 16:43:16.740975 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 16:43:16.753083 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 16:43:16.753151 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 16:43:16.764580 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 16:43:16.776660 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 16:43:16.776739 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 16:43:16.787809 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 16:43:16.800110 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 16:43:16.800203 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 16:43:16.811124 2197206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 16:43:16.822087 2197206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 16:43:16.822160 2197206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 16:43:16.834957 2197206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 16:43:16.902421 2197206 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 16:43:16.902553 2197206 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 16:43:17.042455 2197206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 16:43:17.042629 2197206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 16:43:17.042798 2197206 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 16:43:17.053323 2197206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 16:43:14.324786 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:16.325269 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:18.393718 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:17.321797 2197206 out.go:235]   - Generating certificates and keys ...
	I0120 16:43:17.321934 2197206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 16:43:17.322011 2197206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 16:43:17.402336 2197206 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 16:43:17.536347 2197206 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 16:43:17.688442 2197206 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 16:43:17.858918 2197206 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 16:43:18.183422 2197206 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 16:43:18.183672 2197206 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-708138 localhost] and IPs [192.168.72.88 127.0.0.1 ::1]
	I0120 16:43:18.264748 2197206 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 16:43:18.264953 2197206 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-708138 localhost] and IPs [192.168.72.88 127.0.0.1 ::1]
	I0120 16:43:18.426217 2197206 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 16:43:18.686494 2197206 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 16:43:18.828457 2197206 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 16:43:18.828691 2197206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 16:43:18.955301 2197206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 16:43:19.046031 2197206 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 16:43:19.231335 2197206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 16:43:19.447816 2197206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 16:43:19.619053 2197206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 16:43:19.619607 2197206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 16:43:19.622288 2197206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 16:43:19.624157 2197206 out.go:235]   - Booting up control plane ...
	I0120 16:43:19.624275 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 16:43:19.624380 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 16:43:19.624476 2197206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 16:43:19.646471 2197206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 16:43:19.657842 2197206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 16:43:19.657931 2197206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 16:43:19.804616 2197206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 16:43:19.804743 2197206 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 16:43:20.315932 2197206 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.124273ms
	I0120 16:43:20.316084 2197206 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 16:43:20.825198 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:23.325444 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:25.818525 2197206 kubeadm.go:310] [api-check] The API server is healthy after 5.503297043s
	I0120 16:43:25.835132 2197206 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 16:43:25.869802 2197206 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 16:43:25.925988 2197206 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 16:43:25.926216 2197206 kubeadm.go:310] [mark-control-plane] Marking the node bridge-708138 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 16:43:25.952439 2197206 kubeadm.go:310] [bootstrap-token] Using token: xw20yr.9359ar4c28065art
	I0120 16:43:25.954040 2197206 out.go:235]   - Configuring RBAC rules ...
	I0120 16:43:25.954189 2197206 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 16:43:25.971234 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 16:43:25.984672 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 16:43:25.992321 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 16:43:25.998352 2197206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 16:43:26.005011 2197206 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 16:43:26.224365 2197206 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 16:43:26.676446 2197206 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 16:43:27.225715 2197206 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 16:43:27.229867 2197206 kubeadm.go:310] 
	I0120 16:43:27.229970 2197206 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 16:43:27.229988 2197206 kubeadm.go:310] 
	I0120 16:43:27.230128 2197206 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 16:43:27.230149 2197206 kubeadm.go:310] 
	I0120 16:43:27.230187 2197206 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 16:43:27.230280 2197206 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 16:43:27.230366 2197206 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 16:43:27.230377 2197206 kubeadm.go:310] 
	I0120 16:43:27.230453 2197206 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 16:43:27.230469 2197206 kubeadm.go:310] 
	I0120 16:43:27.230530 2197206 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 16:43:27.230540 2197206 kubeadm.go:310] 
	I0120 16:43:27.230633 2197206 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 16:43:27.230741 2197206 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 16:43:27.230840 2197206 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 16:43:27.230850 2197206 kubeadm.go:310] 
	I0120 16:43:27.230947 2197206 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 16:43:27.231060 2197206 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 16:43:27.231069 2197206 kubeadm.go:310] 
	I0120 16:43:27.231168 2197206 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xw20yr.9359ar4c28065art \
	I0120 16:43:27.231293 2197206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
	I0120 16:43:27.231325 2197206 kubeadm.go:310] 	--control-plane 
	I0120 16:43:27.231336 2197206 kubeadm.go:310] 
	I0120 16:43:27.231463 2197206 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 16:43:27.231479 2197206 kubeadm.go:310] 
	I0120 16:43:27.231554 2197206 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xw20yr.9359ar4c28065art \
	I0120 16:43:27.231702 2197206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 
	I0120 16:43:27.232406 2197206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 16:43:27.232502 2197206 cni.go:84] Creating CNI manager for "bridge"
	I0120 16:43:27.235020 2197206 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 16:43:25.325819 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:27.325884 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:27.236381 2197206 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 16:43:27.251582 2197206 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 16:43:27.277986 2197206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 16:43:27.278066 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:27.278083 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-708138 minikube.k8s.io/updated_at=2025_01_20T16_43_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=bridge-708138 minikube.k8s.io/primary=true
	I0120 16:43:27.318132 2197206 ops.go:34] apiserver oom_adj: -16
	I0120 16:43:27.454138 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:27.955129 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:28.454750 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:28.954684 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:29.454513 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:29.955223 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:30.455022 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:30.954199 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:31.454428 2197206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 16:43:31.606545 2197206 kubeadm.go:1113] duration metric: took 4.328571416s to wait for elevateKubeSystemPrivileges
	I0120 16:43:31.606592 2197206 kubeadm.go:394] duration metric: took 14.938431891s to StartCluster
	I0120 16:43:31.606633 2197206 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:31.606774 2197206 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:43:31.609525 2197206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:43:31.609884 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 16:43:31.609885 2197206 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 16:43:31.609984 2197206 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 16:43:31.610121 2197206 addons.go:69] Setting storage-provisioner=true in profile "bridge-708138"
	I0120 16:43:31.610144 2197206 addons.go:238] Setting addon storage-provisioner=true in "bridge-708138"
	I0120 16:43:31.610141 2197206 addons.go:69] Setting default-storageclass=true in profile "bridge-708138"
	I0120 16:43:31.610154 2197206 config.go:182] Loaded profile config "bridge-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:43:31.610166 2197206 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-708138"
	I0120 16:43:31.610193 2197206 host.go:66] Checking if "bridge-708138" exists ...
	I0120 16:43:31.610720 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.610774 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.610788 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.610837 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.611842 2197206 out.go:177] * Verifying Kubernetes components...
	I0120 16:43:31.613454 2197206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 16:43:31.628647 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45227
	I0120 16:43:31.628881 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0120 16:43:31.629232 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.629383 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.629930 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.629952 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.630016 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.630040 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.630423 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.630687 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.630689 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.631256 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.631304 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.634974 2197206 addons.go:238] Setting addon default-storageclass=true in "bridge-708138"
	I0120 16:43:31.635030 2197206 host.go:66] Checking if "bridge-708138" exists ...
	I0120 16:43:31.635335 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.635387 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.649021 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0120 16:43:31.649452 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.651254 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.651285 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.651867 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.652059 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.653726 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0120 16:43:31.654126 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.654296 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:31.654915 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.654928 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.655380 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.655949 2197206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20109-2129584/.minikube/bin/docker-machine-driver-kvm2
	I0120 16:43:31.656008 2197206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:43:31.656646 2197206 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 16:43:31.658066 2197206 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:31.658082 2197206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 16:43:31.658099 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:31.661450 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.661729 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:31.661760 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.662030 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:31.662235 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:31.662397 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:31.662550 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:31.676457 2197206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0120 16:43:31.677019 2197206 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:43:31.677756 2197206 main.go:141] libmachine: Using API Version  1
	I0120 16:43:31.677789 2197206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:43:31.678148 2197206 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:43:31.678385 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetState
	I0120 16:43:31.680320 2197206 main.go:141] libmachine: (bridge-708138) Calling .DriverName
	I0120 16:43:31.680609 2197206 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:31.680630 2197206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 16:43:31.680655 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHHostname
	I0120 16:43:31.683331 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.683716 2197206 main.go:141] libmachine: (bridge-708138) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:89:1c", ip: ""} in network mk-bridge-708138: {Iface:virbr3 ExpiryTime:2025-01-20 17:42:58 +0000 UTC Type:0 Mac:52:54:00:d9:89:1c Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:bridge-708138 Clientid:01:52:54:00:d9:89:1c}
	I0120 16:43:31.683795 2197206 main.go:141] libmachine: (bridge-708138) DBG | domain bridge-708138 has defined IP address 192.168.72.88 and MAC address 52:54:00:d9:89:1c in network mk-bridge-708138
	I0120 16:43:31.684017 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHPort
	I0120 16:43:31.684235 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHKeyPath
	I0120 16:43:31.684397 2197206 main.go:141] libmachine: (bridge-708138) Calling .GetSSHUsername
	I0120 16:43:31.684535 2197206 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/bridge-708138/id_rsa Username:docker}
	I0120 16:43:31.936634 2197206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 16:43:31.936728 2197206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 16:43:31.976057 2197206 node_ready.go:35] waiting up to 15m0s for node "bridge-708138" to be "Ready" ...
	I0120 16:43:31.985329 2197206 node_ready.go:49] node "bridge-708138" has status "Ready":"True"
	I0120 16:43:31.985356 2197206 node_ready.go:38] duration metric: took 9.257739ms for node "bridge-708138" to be "Ready" ...
	I0120 16:43:31.985368 2197206 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:43:31.995641 2197206 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:32.055183 2197206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 16:43:32.153090 2197206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 16:43:32.568616 2197206 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0120 16:43:32.853746 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.853781 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.853900 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.853924 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854124 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.854175 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.854180 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.854222 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.854226 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.854268 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.854280 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854197 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.854356 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.854138 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856214 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856226 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.856289 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.856306 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.856355 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.856368 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.874144 2197206 main.go:141] libmachine: Making call to close driver server
	I0120 16:43:32.874173 2197206 main.go:141] libmachine: (bridge-708138) Calling .Close
	I0120 16:43:32.874543 2197206 main.go:141] libmachine: Successfully made call to close driver server
	I0120 16:43:32.874584 2197206 main.go:141] libmachine: (bridge-708138) DBG | Closing plugin on server side
	I0120 16:43:32.874595 2197206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 16:43:32.876336 2197206 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 16:43:29.825256 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:31.826538 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:32.877697 2197206 addons.go:514] duration metric: took 1.267734381s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 16:43:33.076155 2197206 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-708138" context rescaled to 1 replicas
	I0120 16:43:33.998522 2197206 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-284cz" not found
	I0120 16:43:33.998557 2197206 pod_ready.go:82] duration metric: took 2.002870414s for pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace to be "Ready" ...
	E0120 16:43:33.998571 2197206 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-284cz" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-284cz" not found
	I0120 16:43:33.998581 2197206 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:36.006241 2197206 pod_ready.go:103] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"False"
	I0120 16:43:34.324997 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:36.326016 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:38.825101 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:38.504747 2197206 pod_ready.go:103] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"False"
	I0120 16:43:40.005785 2197206 pod_ready.go:93] pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.005813 2197206 pod_ready.go:82] duration metric: took 6.007222936s for pod "coredns-668d6bf9bc-6ztbb" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.005823 2197206 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.011217 2197206 pod_ready.go:93] pod "etcd-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.011239 2197206 pod_ready.go:82] duration metric: took 5.409716ms for pod "etcd-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.011248 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.016613 2197206 pod_ready.go:93] pod "kube-apiserver-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.016634 2197206 pod_ready.go:82] duration metric: took 5.379045ms for pod "kube-apiserver-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.016643 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.021777 2197206 pod_ready.go:93] pod "kube-controller-manager-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.021806 2197206 pod_ready.go:82] duration metric: took 5.155108ms for pod "kube-controller-manager-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.021818 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-gz7x6" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.028255 2197206 pod_ready.go:93] pod "kube-proxy-gz7x6" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.028280 2197206 pod_ready.go:82] duration metric: took 6.454274ms for pod "kube-proxy-gz7x6" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.028289 2197206 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.403358 2197206 pod_ready.go:93] pod "kube-scheduler-bridge-708138" in "kube-system" namespace has status "Ready":"True"
	I0120 16:43:40.403389 2197206 pod_ready.go:82] duration metric: took 375.092058ms for pod "kube-scheduler-bridge-708138" in "kube-system" namespace to be "Ready" ...
	I0120 16:43:40.403398 2197206 pod_ready.go:39] duration metric: took 8.418019424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 16:43:40.403415 2197206 api_server.go:52] waiting for apiserver process to appear ...
	I0120 16:43:40.403470 2197206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:43:40.420906 2197206 api_server.go:72] duration metric: took 8.810975265s to wait for apiserver process to appear ...
	I0120 16:43:40.420936 2197206 api_server.go:88] waiting for apiserver healthz status ...
	I0120 16:43:40.420959 2197206 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0120 16:43:40.427501 2197206 api_server.go:279] https://192.168.72.88:8443/healthz returned 200:
	ok
	I0120 16:43:40.428593 2197206 api_server.go:141] control plane version: v1.32.0
	I0120 16:43:40.428625 2197206 api_server.go:131] duration metric: took 7.680154ms to wait for apiserver health ...
	I0120 16:43:40.428636 2197206 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 16:43:40.607673 2197206 system_pods.go:59] 7 kube-system pods found
	I0120 16:43:40.607711 2197206 system_pods.go:61] "coredns-668d6bf9bc-6ztbb" [39abe601-d0fa-4246-b8ab-6a9f4353c207] Running
	I0120 16:43:40.607716 2197206 system_pods.go:61] "etcd-bridge-708138" [73b4b429-aa20-47b8-bd96-bbe96a60b0a5] Running
	I0120 16:43:40.607719 2197206 system_pods.go:61] "kube-apiserver-bridge-708138" [bb3e6a95-e43a-4b98-a1bd-ea15b532e6d5] Running
	I0120 16:43:40.607723 2197206 system_pods.go:61] "kube-controller-manager-bridge-708138" [818c702e-fca4-491e-8677-6fe699c01561] Running
	I0120 16:43:40.607727 2197206 system_pods.go:61] "kube-proxy-gz7x6" [927ee7ed-4e8e-48de-b94c-c91208b52cca] Running
	I0120 16:43:40.607730 2197206 system_pods.go:61] "kube-scheduler-bridge-708138" [518ce086-80f8-4fb1-b1b2-faf5800915d5] Running
	I0120 16:43:40.607733 2197206 system_pods.go:61] "storage-provisioner" [7057ca4d-ad71-42c2-810a-9a33e8b409de] Running
	I0120 16:43:40.607740 2197206 system_pods.go:74] duration metric: took 179.093225ms to wait for pod list to return data ...
	I0120 16:43:40.607747 2197206 default_sa.go:34] waiting for default service account to be created ...
	I0120 16:43:40.803775 2197206 default_sa.go:45] found service account: "default"
	I0120 16:43:40.803805 2197206 default_sa.go:55] duration metric: took 196.051704ms for default service account to be created ...
	I0120 16:43:40.803813 2197206 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 16:43:41.006405 2197206 system_pods.go:87] 7 kube-system pods found
	I0120 16:43:41.203196 2197206 system_pods.go:105] "coredns-668d6bf9bc-6ztbb" [39abe601-d0fa-4246-b8ab-6a9f4353c207] Running
	I0120 16:43:41.203220 2197206 system_pods.go:105] "etcd-bridge-708138" [73b4b429-aa20-47b8-bd96-bbe96a60b0a5] Running
	I0120 16:43:41.203225 2197206 system_pods.go:105] "kube-apiserver-bridge-708138" [bb3e6a95-e43a-4b98-a1bd-ea15b532e6d5] Running
	I0120 16:43:41.203230 2197206 system_pods.go:105] "kube-controller-manager-bridge-708138" [818c702e-fca4-491e-8677-6fe699c01561] Running
	I0120 16:43:41.203234 2197206 system_pods.go:105] "kube-proxy-gz7x6" [927ee7ed-4e8e-48de-b94c-c91208b52cca] Running
	I0120 16:43:41.203238 2197206 system_pods.go:105] "kube-scheduler-bridge-708138" [518ce086-80f8-4fb1-b1b2-faf5800915d5] Running
	I0120 16:43:41.203243 2197206 system_pods.go:105] "storage-provisioner" [7057ca4d-ad71-42c2-810a-9a33e8b409de] Running
	I0120 16:43:41.203251 2197206 system_pods.go:147] duration metric: took 399.431194ms to wait for k8s-apps to be running ...
	I0120 16:43:41.203259 2197206 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 16:43:41.203319 2197206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:43:41.218649 2197206 system_svc.go:56] duration metric: took 15.377778ms WaitForService to wait for kubelet
	I0120 16:43:41.218683 2197206 kubeadm.go:582] duration metric: took 9.608759794s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 16:43:41.218707 2197206 node_conditions.go:102] verifying NodePressure condition ...
	I0120 16:43:41.404150 2197206 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 16:43:41.404181 2197206 node_conditions.go:123] node cpu capacity is 2
	I0120 16:43:41.404194 2197206 node_conditions.go:105] duration metric: took 185.483174ms to run NodePressure ...
	I0120 16:43:41.404207 2197206 start.go:241] waiting for startup goroutines ...
	I0120 16:43:41.404213 2197206 start.go:246] waiting for cluster config update ...
	I0120 16:43:41.404225 2197206 start.go:255] writing updated cluster config ...
	I0120 16:43:41.404496 2197206 ssh_runner.go:195] Run: rm -f paused
	I0120 16:43:41.457290 2197206 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 16:43:41.459151 2197206 out.go:177] * Done! kubectl is now configured to use "bridge-708138" cluster and "default" namespace by default
	I0120 16:43:40.825164 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:43.325186 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:45.825830 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:48.325148 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:50.325324 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:52.825144 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:54.825386 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:57.325511 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:43:59.825432 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:01.826019 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:04.324951 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:06.327813 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:08.825548 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:10.825618 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:13.325998 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:15.824909 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:18.325253 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:20.325659 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:22.825615 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:25.324569 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:27.324668 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:29.325114 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:31.824591 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:33.825417 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:36.325425 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:38.326595 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:40.825370 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:43.325332 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:45.825470 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:48.325279 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:50.825752 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:53.326233 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:55.327674 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:57.824868 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:44:59.825796 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:02.325316 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:04.325859 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:06.825325 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:09.325718 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:11.825001 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:14.324938 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:16.325124 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:18.325501 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:20.825364 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:22.827208 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:25.325469 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:27.825982 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:30.325432 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:32.325551 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:34.825047 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:36.825526 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:39.325753 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:41.825898 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:44.325151 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:46.325219 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:48.325661 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:50.826115 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:53.325524 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:55.825672 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:45:57.825995 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:00.325672 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:02.824695 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:04.825548 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:07.325274 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:09.325798 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:11.824561 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:13.825167 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:15.825328 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:18.324814 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:20.824710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:22.825668 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:25.325111 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:27.824859 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:29.825200 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:32.328676 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:34.825710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:36.826122 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:39.324220 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:41.324710 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:43.325287 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:45.325431 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:47.824648 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:49.825286 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:51.825539 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:53.825772 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:55.826486 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:46:58.324721 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:00.325134 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:02.825138 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:05.324759 2195552 node_ready.go:53] node "flannel-708138" has status "Ready":"False"
	I0120 16:47:05.324796 2195552 node_ready.go:38] duration metric: took 4m0.003559137s for node "flannel-708138" to be "Ready" ...
	I0120 16:47:05.327110 2195552 out.go:201] 
	W0120 16:47:05.328484 2195552 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0120 16:47:05.328509 2195552 out.go:270] * 
	W0120 16:47:05.329391 2195552 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 16:47:05.331128 2195552 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.686679141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392106686646724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df73210c-8d19-4906-ba6a-e8b50670e8fa name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.687391140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8899f4f6-1835-4624-8d09-f2965763131e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.687448150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8899f4f6-1835-4624-8d09-f2965763131e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.687478644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8899f4f6-1835-4624-8d09-f2965763131e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.722884753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2967c66-99e5-4a0c-9c05-2fa911969dc1 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.722965998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2967c66-99e5-4a0c-9c05-2fa911969dc1 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.724502967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b40a45ec-efc8-422b-ac9e-079071104019 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.725017335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392106724969833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b40a45ec-efc8-422b-ac9e-079071104019 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.725821104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e34ae74-8d7f-40a2-abc1-733736ef085b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.725871554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e34ae74-8d7f-40a2-abc1-733736ef085b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.725911020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6e34ae74-8d7f-40a2-abc1-733736ef085b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.764689561Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83c2aea0-0549-464c-a4b9-4b026f83f2c8 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.764829562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83c2aea0-0549-464c-a4b9-4b026f83f2c8 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.766123803Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32fd0471-ba7f-4fb8-8235-8baa7253cf99 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.766543129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392106766524190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32fd0471-ba7f-4fb8-8235-8baa7253cf99 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.767213808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=956fe13c-745c-4a1e-afa9-e9a82c0f8cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.767301187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=956fe13c-745c-4a1e-afa9-e9a82c0f8cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.767336791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=956fe13c-745c-4a1e-afa9-e9a82c0f8cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.805727305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd16de8a-3159-49c4-b580-16678478fa27 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.805903002Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd16de8a-3159-49c4-b580-16678478fa27 name=/runtime.v1.RuntimeService/Version
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.807141895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d203c95f-b399-487f-9560-c312e3c0b7e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.807661144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737392106807631954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d203c95f-b399-487f-9560-c312e3c0b7e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.808377052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1f5d3d1-4603-48d6-84b5-5e4f1a9afe07 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.808470573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1f5d3d1-4603-48d6-84b5-5e4f1a9afe07 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 16:55:06 old-k8s-version-806597 crio[634]: time="2025-01-20 16:55:06.808529218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f1f5d3d1-4603-48d6-84b5-5e4f1a9afe07 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 16:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055270] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137706] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.025165] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.690534] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.917675] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.063794] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072347] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.228149] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.140637] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.243673] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.859627] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.059674] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.198086] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[ +11.593379] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 16:36] systemd-fstab-generator[5007]: Ignoring "noauto" option for root device
	[Jan20 16:38] systemd-fstab-generator[5282]: Ignoring "noauto" option for root device
	[  +0.065823] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 16:55:06 up 22 min,  0 users,  load average: 0.02, 0.05, 0.07
	Linux old-k8s-version-806597 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000b74510)
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]: goroutine 156 [select]:
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b7bef0, 0x4f0ac20, 0xc000b760f0, 0x1, 0xc0001000c0)
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e9180, 0xc0001000c0)
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008c7480, 0xc000b60c80)
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 20 16:55:03 old-k8s-version-806597 kubelet[7052]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 20 16:55:03 old-k8s-version-806597 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 20 16:55:03 old-k8s-version-806597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 20 16:55:03 old-k8s-version-806597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 170.
	Jan 20 16:55:03 old-k8s-version-806597 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 16:55:03 old-k8s-version-806597 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 16:55:04 old-k8s-version-806597 kubelet[7062]: I0120 16:55:04.067730    7062 server.go:416] Version: v1.20.0
	Jan 20 16:55:04 old-k8s-version-806597 kubelet[7062]: I0120 16:55:04.068563    7062 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 16:55:04 old-k8s-version-806597 kubelet[7062]: I0120 16:55:04.073208    7062 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 16:55:04 old-k8s-version-806597 kubelet[7062]: I0120 16:55:04.074639    7062 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jan 20 16:55:04 old-k8s-version-806597 kubelet[7062]: W0120 16:55:04.074861    7062 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 2 (238.415466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-806597" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (326.94s)

                                                
                                    

Test pass (242/304)

Order	Passed test	Duration
3 TestDownloadOnly/v1.20.0/json-events 10.28
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.0/json-events 4.84
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.07
18 TestDownloadOnly/v1.32.0/DeleteAll 0.16
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.64
22 TestOffline 58.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 136
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 8.53
35 TestAddons/parallel/Registry 15.9
37 TestAddons/parallel/InspektorGadget 11.98
38 TestAddons/parallel/MetricsServer 5.78
41 TestAddons/parallel/Headlamp 20.18
42 TestAddons/parallel/CloudSpanner 6.7
43 TestAddons/parallel/LocalPath 55.5
44 TestAddons/parallel/NvidiaDevicePlugin 7.32
45 TestAddons/parallel/Yakd 10.78
47 TestAddons/StoppedEnableDisable 91.31
48 TestCertOptions 62.8
49 TestCertExpiration 335.54
51 TestForceSystemdFlag 47.65
52 TestForceSystemdEnv 100.79
54 TestKVMDriverInstallOrUpdate 4.65
58 TestErrorSpam/setup 44.19
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.78
61 TestErrorSpam/pause 1.72
62 TestErrorSpam/unpause 1.82
63 TestErrorSpam/stop 4.99
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.62
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 53.6
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
75 TestFunctional/serial/CacheCmd/cache/add_local 1.48
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.78
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 31.87
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.41
86 TestFunctional/serial/LogsFileCmd 1.49
87 TestFunctional/serial/InvalidService 4.58
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 36.12
91 TestFunctional/parallel/DryRun 0.35
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.06
97 TestFunctional/parallel/ServiceCmdConnect 11.56
98 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/SSHCmd 0.47
102 TestFunctional/parallel/CpCmd 1.39
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.42
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
113 TestFunctional/parallel/License 0.28
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.26
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
125 TestFunctional/parallel/ProfileCmd/profile_list 0.43
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
127 TestFunctional/parallel/MountCmd/any-port 6.47
128 TestFunctional/parallel/MountCmd/specific-port 1.5
129 TestFunctional/parallel/ServiceCmd/List 0.26
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
131 TestFunctional/parallel/MountCmd/VerifyCleanup 0.78
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
133 TestFunctional/parallel/ServiceCmd/Format 0.34
134 TestFunctional/parallel/ServiceCmd/URL 0.33
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.49
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.68
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 203.19
160 TestMultiControlPlane/serial/DeployApp 5.17
161 TestMultiControlPlane/serial/PingHostFromPods 1.28
162 TestMultiControlPlane/serial/AddWorkerNode 55.79
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
165 TestMultiControlPlane/serial/CopyFile 13.77
166 TestMultiControlPlane/serial/StopSecondaryNode 91.72
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
168 TestMultiControlPlane/serial/RestartSecondaryNode 54.24
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 446.07
171 TestMultiControlPlane/serial/DeleteSecondaryNode 18.39
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
173 TestMultiControlPlane/serial/StopCluster 273.01
174 TestMultiControlPlane/serial/RestartCluster 128
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 78.92
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
181 TestJSONOutput/start/Command 59.71
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.64
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.38
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 92.4
213 TestMountStart/serial/StartWithMountFirst 26.23
214 TestMountStart/serial/VerifyMountFirst 0.4
215 TestMountStart/serial/StartWithMountSecond 29.79
216 TestMountStart/serial/VerifyMountSecond 0.41
217 TestMountStart/serial/DeleteFirst 0.7
218 TestMountStart/serial/VerifyMountPostDelete 0.4
219 TestMountStart/serial/Stop 2.3
220 TestMountStart/serial/RestartStopped 23.57
221 TestMountStart/serial/VerifyMountPostStop 0.4
224 TestMultiNode/serial/FreshStart2Nodes 119.04
225 TestMultiNode/serial/DeployApp2Nodes 5.39
226 TestMultiNode/serial/PingHostFrom2Pods 0.81
227 TestMultiNode/serial/AddNode 51.77
228 TestMultiNode/serial/MultiNodeLabels 0.07
229 TestMultiNode/serial/ProfileList 0.62
230 TestMultiNode/serial/CopyFile 7.72
231 TestMultiNode/serial/StopNode 2.45
232 TestMultiNode/serial/StartAfterStop 38.82
233 TestMultiNode/serial/RestartKeepsNodes 329.06
234 TestMultiNode/serial/DeleteNode 2.74
235 TestMultiNode/serial/StopMultiNode 182.13
236 TestMultiNode/serial/RestartMultiNode 98.33
237 TestMultiNode/serial/ValidateNameConflict 45.37
244 TestScheduledStopUnix 114.89
248 TestRunningBinaryUpgrade 151.4
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 124.6
255 TestStoppedBinaryUpgrade/Setup 0.71
256 TestStoppedBinaryUpgrade/Upgrade 132.74
257 TestNoKubernetes/serial/StartWithStopK8s 41.06
258 TestNoKubernetes/serial/Start 25.24
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
260 TestNoKubernetes/serial/ProfileList 6.9
261 TestNoKubernetes/serial/Stop 1.4
262 TestNoKubernetes/serial/StartNoArgs 35.78
263 TestStoppedBinaryUpgrade/MinikubeLogs 1
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
273 TestPause/serial/Start 84.52
282 TestNetworkPlugins/group/false 4.6
289 TestStartStop/group/no-preload/serial/FirstStart 160.44
291 TestStartStop/group/embed-certs/serial/FirstStart 100.2
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.4
294 TestStartStop/group/embed-certs/serial/DeployApp 9.29
295 TestStartStop/group/no-preload/serial/DeployApp 9.29
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
297 TestStartStop/group/embed-certs/serial/Stop 91.08
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
299 TestStartStop/group/no-preload/serial/Stop 91.09
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.35
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/no-preload/serial/SecondStart 323.93
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 320.89
311 TestStartStop/group/old-k8s-version/serial/Stop 4.32
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
316 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/no-preload/serial/Pause 2.88
320 TestStartStop/group/newest-cni/serial/FirstStart 54.8
321 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
322 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
323 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.83
324 TestNetworkPlugins/group/auto/Start 79.75
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
327 TestStartStop/group/newest-cni/serial/Stop 7.37
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.41
329 TestStartStop/group/newest-cni/serial/SecondStart 38.88
330 TestNetworkPlugins/group/auto/KubeletFlags 0.25
331 TestNetworkPlugins/group/auto/NetCatPod 13.3
332 TestNetworkPlugins/group/auto/DNS 0.2
333 TestNetworkPlugins/group/auto/Localhost 0.15
334 TestNetworkPlugins/group/auto/HairPin 0.16
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
338 TestStartStop/group/newest-cni/serial/Pause 4.59
339 TestNetworkPlugins/group/kindnet/Start 65.85
341 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
343 TestNetworkPlugins/group/kindnet/NetCatPod 13.24
344 TestNetworkPlugins/group/kindnet/DNS 0.19
345 TestNetworkPlugins/group/kindnet/Localhost 0.12
346 TestNetworkPlugins/group/kindnet/HairPin 0.13
347 TestNetworkPlugins/group/custom-flannel/Start 68.49
348 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
349 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
350 TestNetworkPlugins/group/custom-flannel/DNS 0.21
351 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
352 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
353 TestNetworkPlugins/group/enable-default-cni/Start 85.42
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.23
357 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
358 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
359 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
361 TestNetworkPlugins/group/bridge/Start 69.51
362 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
363 TestNetworkPlugins/group/bridge/NetCatPod 11.26
364 TestNetworkPlugins/group/bridge/DNS 0.15
365 TestNetworkPlugins/group/bridge/Localhost 0.12
366 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (10.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-193100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-193100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.278351528s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.28s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 15:04:51.033036 2136749 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 15:04:51.033146 2136749 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-193100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-193100: exit status 85 (65.635175ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-193100 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |          |
	|         | -p download-only-193100        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 15:04:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 15:04:40.798543 2136761 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:04:40.798717 2136761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:04:40.798729 2136761 out.go:358] Setting ErrFile to fd 2...
	I0120 15:04:40.798736 2136761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:04:40.798952 2136761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	W0120 15:04:40.799127 2136761 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20109-2129584/.minikube/config/config.json: open /home/jenkins/minikube-integration/20109-2129584/.minikube/config/config.json: no such file or directory
	I0120 15:04:40.799750 2136761 out.go:352] Setting JSON to true
	I0120 15:04:40.800790 2136761 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":24427,"bootTime":1737361054,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:04:40.800902 2136761 start.go:139] virtualization: kvm guest
	I0120 15:04:40.803735 2136761 out.go:97] [download-only-193100] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:04:40.803928 2136761 notify.go:220] Checking for updates...
	W0120 15:04:40.803936 2136761 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 15:04:40.805381 2136761 out.go:169] MINIKUBE_LOCATION=20109
	I0120 15:04:40.806900 2136761 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:04:40.808212 2136761 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:04:40.809500 2136761 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:04:40.810762 2136761 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 15:04:40.813195 2136761 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 15:04:40.813482 2136761 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:04:40.851370 2136761 out.go:97] Using the kvm2 driver based on user configuration
	I0120 15:04:40.851404 2136761 start.go:297] selected driver: kvm2
	I0120 15:04:40.851415 2136761 start.go:901] validating driver "kvm2" against <nil>
	I0120 15:04:40.851765 2136761 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 15:04:40.851857 2136761 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 15:04:40.868029 2136761 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 15:04:40.868087 2136761 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 15:04:40.868649 2136761 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 15:04:40.868793 2136761 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 15:04:40.868840 2136761 cni.go:84] Creating CNI manager for ""
	I0120 15:04:40.868890 2136761 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 15:04:40.868899 2136761 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 15:04:40.868963 2136761 start.go:340] cluster config:
	{Name:download-only-193100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-193100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:04:40.869163 2136761 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 15:04:40.871141 2136761 out.go:97] Downloading VM boot image ...
	I0120 15:04:40.871188 2136761 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 15:04:44.586020 2136761 out.go:97] Starting "download-only-193100" primary control-plane node in "download-only-193100" cluster
	I0120 15:04:44.586066 2136761 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 15:04:44.612351 2136761 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 15:04:44.612400 2136761 cache.go:56] Caching tarball of preloaded images
	I0120 15:04:44.612579 2136761 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 15:04:44.614653 2136761 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 15:04:44.614692 2136761 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0120 15:04:44.647179 2136761 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 15:04:49.469993 2136761 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0120 15:04:49.470105 2136761 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0120 15:04:50.384017 2136761 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 15:04:50.384410 2136761 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/download-only-193100/config.json ...
	I0120 15:04:50.384444 2136761 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/download-only-193100/config.json: {Name:mk1c9ada8dacfccd217bd0a90b65f38589c09c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 15:04:50.384621 2136761 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 15:04:50.384805 2136761 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-193100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-193100"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-193100
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/json-events (4.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-647713 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-647713 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.836814599s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (4.84s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 15:04:56.228861 2136749 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I0120 15:04:56.228912 2136749 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-647713
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-647713: exit status 85 (68.838268ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-193100 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | -p download-only-193100        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| delete  | -p download-only-193100        | download-only-193100 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
	| start   | -o=json --download-only        | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC |                     |
	|         | -p download-only-647713        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 15:04:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 15:04:51.436998 2136968 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:04:51.437125 2136968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:04:51.437133 2136968 out.go:358] Setting ErrFile to fd 2...
	I0120 15:04:51.437140 2136968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:04:51.437327 2136968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:04:51.437956 2136968 out.go:352] Setting JSON to true
	I0120 15:04:51.439009 2136968 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":24437,"bootTime":1737361054,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:04:51.439144 2136968 start.go:139] virtualization: kvm guest
	I0120 15:04:51.441365 2136968 out.go:97] [download-only-647713] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:04:51.441535 2136968 notify.go:220] Checking for updates...
	I0120 15:04:51.442929 2136968 out.go:169] MINIKUBE_LOCATION=20109
	I0120 15:04:51.444165 2136968 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:04:51.445363 2136968 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:04:51.446442 2136968 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:04:51.447700 2136968 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-647713 host does not exist
	  To start a cluster, run: "minikube start -p download-only-647713"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-647713
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I0120 15:04:56.883797 2136749 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-318745 --alsologtostderr --binary-mirror http://127.0.0.1:45603 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-318745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-318745
--- PASS: TestBinaryMirror (0.64s)

                                                
                                    
x
+
TestOffline (58.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-366716 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-366716 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (57.94401932s)
helpers_test.go:175: Cleaning up "offline-crio-366716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-366716
--- PASS: TestOffline (58.81s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-823768
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-823768: exit status 85 (58.93407ms)

                                                
                                                
-- stdout --
	* Profile "addons-823768" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-823768"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-823768
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-823768: exit status 85 (58.410086ms)

                                                
                                                
-- stdout --
	* Profile "addons-823768" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-823768"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (136s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-823768 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-823768 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m16.002060174s)
--- PASS: TestAddons/Setup (136.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-823768 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-823768 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-823768 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-823768 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [606ebe90-54f5-4442-a16c-ee4d7c99146e] Pending
helpers_test.go:344: "busybox" [606ebe90-54f5-4442-a16c-ee4d7c99146e] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003419394s
addons_test.go:633: (dbg) Run:  kubectl --context addons-823768 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-823768 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-823768 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.53s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 9.872373ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-zjrvn" [0eff11df-e7ff-4331-8d40-9b86a497286d] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004776316s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s6v6f" [fd22a4f5-094c-4b62-a18c-cb9b1478e55f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004227354s
addons_test.go:331: (dbg) Run:  kubectl --context addons-823768 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-823768 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-823768 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.956215312s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 ip
2025/01/20 15:07:46 [DEBUG] GET http://192.168.39.158:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.90s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vvtz6" [5f70c340-a70e-4f10-ac54-0bd3d47ead3c] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004784484s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable inspektor-gadget --alsologtostderr -v=1: (5.974298266s)
--- PASS: TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.249045ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-9st7r" [6298e5c1-be6a-46ae-ab5f-36c0273b0dfb] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005139461s
addons_test.go:402: (dbg) Run:  kubectl --context addons-823768 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.18s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-823768 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-823768 --alsologtostderr -v=1: (1.209263793s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-hl6wk" [f3879f39-1dac-4485-9443-b778ec3cc6a2] Pending
helpers_test.go:344: "headlamp-69d78d796f-hl6wk" [f3879f39-1dac-4485-9443-b778ec3cc6a2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-hl6wk" [f3879f39-1dac-4485-9443-b778ec3cc6a2] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.007890261s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable headlamp --alsologtostderr -v=1: (5.964125474s)
--- PASS: TestAddons/parallel/Headlamp (20.18s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-4kh4r" [a1d4ee03-2c50-47ff-a144-da95e32792f5] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004712219s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.70s)

TestAddons/parallel/LocalPath (55.5s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-823768 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-823768 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823768 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9d041dd4-0e90-41d5-bed2-c93728392dac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9d041dd4-0e90-41d5-bed2-c93728392dac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9d041dd4-0e90-41d5-bed2-c93728392dac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005279778s
addons_test.go:906: (dbg) Run:  kubectl --context addons-823768 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 ssh "cat /opt/local-path-provisioner/pvc-f17509c2-6d0e-4c09-9067-5f1359f0d7a1_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-823768 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-823768 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.603533589s)
--- PASS: TestAddons/parallel/LocalPath (55.50s)
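
The helpers_test.go:394 lines above are a poll loop: the suite re-reads the claim's status.phase until test-pvc reports Bound (local-path binds only after the consuming pod is scheduled), then waits for the test-local-path pod to complete. A minimal standalone sketch of that polling pattern in Go, reusing the context and claim names from this run rather than the suite's own helpers:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // the test waits 5m0s for the claim
	for time.Now().Before(deadline) {
		// Same query the helper issues: only the PVC's status.phase field.
		out, _ := exec.Command("kubectl", "--context", "addons-823768",
			"get", "pvc", "test-pvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		fmt.Println("pvc phase:", phase)
		if phase == "Bound" {
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for test-pvc to become Bound")
}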

TestAddons/parallel/NvidiaDevicePlugin (7.32s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nbm5g" [cef6725a-67fd-465e-abee-d71f4159ef92] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004425728s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.315502128s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.32s)

TestAddons/parallel/Yakd (10.78s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-xvzkb" [e820f690-c7a6-4fe7-b2ba-13674cb4fa56] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004751866s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable yakd --alsologtostderr -v=1: (5.770097729s)
--- PASS: TestAddons/parallel/Yakd (10.78s)

TestAddons/StoppedEnableDisable (91.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-823768
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-823768: (1m31.00398627s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-823768
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-823768
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-823768
--- PASS: TestAddons/StoppedEnableDisable (91.31s)

TestCertOptions (62.8s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-435922 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-435922 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m1.296781106s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-435922 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-435922 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-435922 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-435922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-435922
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-435922: (1.032960839s)
--- PASS: TestCertOptions (62.80s)

TestCertExpiration (335.54s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-448539 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-448539 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m12.73344389s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-448539 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0120 16:23:37.323925 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-448539 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m21.693370102s)
helpers_test.go:175: Cleaning up "cert-expiration-448539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-448539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-448539: (1.110402157s)
--- PASS: TestCertExpiration (335.54s)

TestForceSystemdFlag (47.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-860028 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0120 16:24:28.733287 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:45.663788 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-860028 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.21552608s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-860028 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-860028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-860028
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-860028: (1.175190853s)
--- PASS: TestForceSystemdFlag (47.65s)

TestForceSystemdEnv (100.79s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-417532 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-417532 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m39.747401903s)
helpers_test.go:175: Cleaning up "force-systemd-env-417532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-417532
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-417532: (1.042170635s)
--- PASS: TestForceSystemdEnv (100.79s)

TestKVMDriverInstallOrUpdate (4.65s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0120 16:25:05.558739 2136749 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 16:25:05.558928 2136749 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0120 16:25:05.600269 2136749 install.go:62] docker-machine-driver-kvm2: exit status 1
W0120 16:25:05.600699 2136749 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 16:25:05.600770 2136749 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate858879667/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.65s)
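
The install.go lines above trace the flow under test: take an install lock, validate docker-machine-driver-kvm2 on the PATH, and, when validation fails, download the release binary alongside its .sha256 for verification. A minimal sketch of that check-then-download pattern, with the URL taken from the log; the lock handling, destination path, and checksum comparison here are illustrative rather than minikube's own install code:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
)

// URL copied from the download.go line in the log above.
const driverURL = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64"

func main() {
	// "Validating docker-machine-driver-kvm2": is a driver binary already on the PATH?
	if path, err := exec.LookPath("docker-machine-driver-kvm2"); err == nil {
		fmt.Println("driver already present:", path)
		return
	}

	// "Downloading driver docker-machine-driver-kvm2": fetch the release binary.
	resp, err := http.Get(driverURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.OpenFile("docker-machine-driver-kvm2", os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Hash while writing so the result can be compared against the published .sha256 file.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		panic(err)
	}
	fmt.Println("downloaded driver, sha256:", hex.EncodeToString(h.Sum(nil)))
}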

TestErrorSpam/setup (44.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-342259 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-342259 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-342259 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-342259 --driver=kvm2  --container-runtime=crio: (44.188652616s)
--- PASS: TestErrorSpam/setup (44.19s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

TestErrorSpam/stop (4.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 stop: (2.343568825s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-342259 --log_dir /tmp/nospam-342259 stop: (1.732305457s)
--- PASS: TestErrorSpam/stop (4.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20109-2129584/.minikube/files/etc/test/nested/copy/2136749/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (83.62s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-232451 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0120 15:17:14.250162 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:14.256606 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:14.268097 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:14.289533 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:14.330999 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:14.412578 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:14.574213 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:14.895874 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:15.538000 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:16.819706 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:19.382757 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:24.505083 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:34.746418 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:17:55.227858 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-232451 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.616701895s)
--- PASS: TestFunctional/serial/StartWithProxy (83.62s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (53.6s)

=== RUN   TestFunctional/serial/SoftStart
I0120 15:18:04.720040 2136749 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-232451 --alsologtostderr -v=8
E0120 15:18:36.190117 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-232451 --alsologtostderr -v=8: (53.597098254s)
functional_test.go:663: soft start took 53.597980276s for "functional-232451" cluster.
I0120 15:18:58.317550 2136749 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (53.60s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-232451 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 cache add registry.k8s.io/pause:3.1: (1.115948825s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 cache add registry.k8s.io/pause:3.3: (1.312790146s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 cache add registry.k8s.io/pause:latest: (1.158567334s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-232451 /tmp/TestFunctionalserialCacheCmdcacheadd_local2785708675/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cache add minikube-local-cache-test:functional-232451
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 cache add minikube-local-cache-test:functional-232451: (1.142204631s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cache delete minikube-local-cache-test:functional-232451
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-232451
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.554669ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 cache reload: (1.05507071s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)
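
The sequence above removes the cached image inside the node, confirms crictl inspecti then fails (exit status 1), runs cache reload, and confirms the image is present again. A standalone sketch of the same round trip driven with os/exec, using the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary against the functional-232451 profile,
// echoes the combined output, and returns the error (nil means exit status 0).
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-232451"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	// Remove the image from the node's container runtime.
	_ = run("ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")

	// While it is gone, inspecti is expected to fail (the test saw exit status 1).
	if run("ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest") == nil {
		fmt.Println("unexpected: image still present before reload")
	}

	// cache reload pushes the host-side cache back into the node...
	if err := run("cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
	}

	// ...after which the inspect succeeds again.
	if err := run("ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("unexpected: image still missing after reload:", err)
	}
}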

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 kubectl -- --context functional-232451 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-232451 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (31.87s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-232451 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-232451 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.871475624s)
functional_test.go:761: restart took 31.871591396s for "functional-232451" cluster.
I0120 15:19:37.850990 2136749 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (31.87s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-232451 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
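
The health check above fetches the tier=control-plane pods as JSON and asserts each is Running and Ready. A smaller spot check of just the phases, using kubectl's jsonpath output and the context name from this run (the jsonpath expression is illustrative, not the test's own query):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One line per pod: "<name> <phase>".
	out, err := exec.Command("kubectl", "--context", "functional-232451",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system",
		"-o", `jsonpath={range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}`).Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if !strings.HasSuffix(line, " Running") {
			fmt.Println("control-plane pod not Running:", line)
		}
	}
	fmt.Print(string(out))
}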

TestFunctional/serial/LogsCmd (1.41s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 logs: (1.410658375s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 logs --file /tmp/TestFunctionalserialLogsFileCmd577206176/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 logs --file /tmp/TestFunctionalserialLogsFileCmd577206176/001/logs.txt: (1.488307202s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.58s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-232451 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-232451
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-232451: exit status 115 (291.46994ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.125:31762 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-232451 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-232451 delete -f testdata/invalidsvc.yaml: (1.081406106s)
--- PASS: TestFunctional/serial/InvalidService (4.58s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 config get cpus: exit status 14 (68.716548ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 config get cpus: exit status 14 (64.6697ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
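
The round trip above leans on minikube config get exiting with status 14 when the key is absent: unset, get (fails), set, get (succeeds), unset, get (fails again). A minimal sketch that drives the same cycle with os/exec and reports the exit codes, using the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// config runs "minikube -p functional-232451 config <args>" and returns its exit code.
func config(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-232451", "config"}, args...)...)
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		panic(err) // binary not found, etc.
	}
	return 0
}

func main() {
	config("unset", "cpus")
	fmt.Println("get after unset:", config("get", "cpus")) // expected 14: key not found
	config("set", "cpus", "2")
	fmt.Println("get after set:  ", config("get", "cpus")) // expected 0
	config("unset", "cpus")
	fmt.Println("get after unset:", config("get", "cpus")) // expected 14 again
}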

TestFunctional/parallel/DashboardCmd (36.12s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-232451 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-232451 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2146310: os: process already finished
E0120 15:22:14.249895 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:22:41.954836 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (36.12s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-232451 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 15:19:58.112225 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-232451 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (173.84382ms)

                                                
                                                
-- stdout --
	* [functional-232451] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 15:19:58.097623 2145814 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:19:58.097909 2145814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:19:58.097923 2145814 out.go:358] Setting ErrFile to fd 2...
	I0120 15:19:58.097931 2145814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:19:58.098252 2145814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:19:58.099043 2145814 out.go:352] Setting JSON to false
	I0120 15:19:58.100460 2145814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25344,"bootTime":1737361054,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:19:58.100557 2145814 start.go:139] virtualization: kvm guest
	I0120 15:19:58.104741 2145814 out.go:177] * [functional-232451] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:19:58.107015 2145814 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 15:19:58.107060 2145814 notify.go:220] Checking for updates...
	I0120 15:19:58.110937 2145814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:19:58.112825 2145814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:19:58.114649 2145814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:19:58.116124 2145814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 15:19:58.117756 2145814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 15:19:58.119787 2145814 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:19:58.120256 2145814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.120328 2145814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.139032 2145814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0120 15:19:58.139546 2145814 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.140110 2145814 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.140133 2145814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.140610 2145814 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.140804 2145814 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.141135 2145814 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:19:58.141608 2145814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.141704 2145814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.159694 2145814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38627
	I0120 15:19:58.160243 2145814 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.160844 2145814 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.160867 2145814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.161201 2145814 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.161425 2145814 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.202346 2145814 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 15:19:58.203841 2145814 start.go:297] selected driver: kvm2
	I0120 15:19:58.203861 2145814 start.go:901] validating driver "kvm2" against &{Name:functional-232451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-232451 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:19:58.204032 2145814 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:19:58.206427 2145814 out.go:201] 
	W0120 15:19:58.207783 2145814 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 15:19:58.209027 2145814 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-232451 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)
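
The dry-run check above asserts that requesting 250MB fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work starts, while the follow-up dry run without the memory override succeeds. A minimal sketch of the failing case, checking only the exit code and using the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "functional-232451", "--dry-run",
		"--memory", "250MB", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("got the expected RSRC_INSUFFICIENT_REQ_MEMORY exit status (23)")
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}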

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-232451 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-232451 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (176.428394ms)

                                                
                                                
-- stdout --
	* [functional-232451] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 15:19:58.186339 2145841 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:19:58.186498 2145841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:19:58.186512 2145841 out.go:358] Setting ErrFile to fd 2...
	I0120 15:19:58.186518 2145841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:19:58.186974 2145841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:19:58.187766 2145841 out.go:352] Setting JSON to false
	I0120 15:19:58.189449 2145841 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25344,"bootTime":1737361054,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:19:58.189552 2145841 start.go:139] virtualization: kvm guest
	I0120 15:19:58.191908 2145841 out.go:177] * [functional-232451] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0120 15:19:58.193638 2145841 notify.go:220] Checking for updates...
	I0120 15:19:58.193959 2145841 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 15:19:58.195319 2145841 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:19:58.196695 2145841 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 15:19:58.198237 2145841 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 15:19:58.199562 2145841 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 15:19:58.200916 2145841 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 15:19:58.202926 2145841 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:19:58.203353 2145841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.203409 2145841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.224216 2145841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0120 15:19:58.224903 2145841 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.225621 2145841 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.225648 2145841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.226111 2145841 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.226362 2145841 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.226761 2145841 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:19:58.227211 2145841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:19:58.227278 2145841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:19:58.247151 2145841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0120 15:19:58.248682 2145841 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:19:58.249311 2145841 main.go:141] libmachine: Using API Version  1
	I0120 15:19:58.249336 2145841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:19:58.249827 2145841 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:19:58.250043 2145841 main.go:141] libmachine: (functional-232451) Calling .DriverName
	I0120 15:19:58.292070 2145841 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0120 15:19:58.293487 2145841 start.go:297] selected driver: kvm2
	I0120 15:19:58.293506 2145841 start.go:901] validating driver "kvm2" against &{Name:functional-232451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-232451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:19:58.293670 2145841 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:19:58.295749 2145841 out.go:201] 
	W0120 15:19:58.297021 2145841 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 15:19:58.298375 2145841 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
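
For context, the localized stderr above is the expected result of this test: under a French locale, minikube refuses to proceed because the requested allocation of 250 MiB is below the usable minimum of 1800 MB (RSRC_INSUFFICIENT_REQ_MEMORY), and the test passes because that refusal is printed in French. A rough manual equivalent, with the locale value and exact flags assumed rather than taken from this log, would be:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-232451 --dry-run --memory=250MB --driver=kvm2 --container-runtime=crio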

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-232451 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-232451 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-fthjw" [d1ca943d-f059-4c1b-8f13-95f6a0bb8c97] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-fthjw" [d1ca943d-f059-4c1b-8f13-95f6a0bb8c97] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00601508s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.125:31995
functional_test.go:1675: http://192.168.39.125:31995: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-fthjw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.125:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.125:31995
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.56s)
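
The passing flow above boils down to deploying the echoserver image, exposing it as a NodePort service, asking minikube for the service URL, and fetching that URL over HTTP. A condensed manual equivalent is shown below; the final curl is an assumed stand-in for the HTTP GET the test performs in-process:

	kubectl --context functional-232451 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-232451 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-232451 service hello-node-connect --url
	curl -s http://192.168.39.125:31995/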

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh -n functional-232451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cp functional-232451:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1946812816/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh -n functional-232451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh -n functional-232451 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2136749/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo cat /etc/test/nested/copy/2136749/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2136749.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo cat /etc/ssl/certs/2136749.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2136749.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo cat /usr/share/ca-certificates/2136749.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/21367492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo cat /etc/ssl/certs/21367492.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/21367492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo cat /usr/share/ca-certificates/21367492.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.42s)
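
The paths checked above come in pairs: /etc/ssl/certs/2136749.pem and /usr/share/ca-certificates/2136749.pem are the certificates minikube synced into the VM, while names like /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 are the hashed links a CA store keeps for lookup. Assuming the hashed names follow OpenSSL's subject-hash convention (and that openssl is available in the guest), the link name for a given certificate can be recomputed with:

	out/minikube-linux-amd64 -p functional-232451 ssh "openssl x509 -hash -noout -in /etc/ssl/certs/2136749.pem"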

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-232451 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 ssh "sudo systemctl is-active docker": exit status 1 (229.563083ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 ssh "sudo systemctl is-active containerd": exit status 1 (225.848158ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
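
Both non-zero exits above are the desired outcome: systemctl is-active prints "inactive" and exits with status 3 when a unit is not running, so docker and containerd being inactive is exactly what a crio-based profile should report. As a sanity check one would expect the configured runtime to answer "active" with exit status 0, e.g. (the crio unit name is an assumption, not taken from this log):

	out/minikube-linux-amd64 -p functional-232451 ssh "sudo systemctl is-active crio"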

                                                
                                    
x
+
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-232451 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-232451 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-xwqfj" [527fee39-02e6-465a-9d2c-d3b6ba261a85] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-xwqfj" [527fee39-02e6-465a-9d2c-d3b6ba261a85] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005422648s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "373.030761ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "54.675837ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "302.822157ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "56.622236ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdany-port3439113775/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737386389101265941" to /tmp/TestFunctionalparallelMountCmdany-port3439113775/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737386389101265941" to /tmp/TestFunctionalparallelMountCmdany-port3439113775/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737386389101265941" to /tmp/TestFunctionalparallelMountCmdany-port3439113775/001/test-1737386389101265941
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (265.670543ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0120 15:19:49.367262 2136749 retry.go:31] will retry after 485.538464ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 15:19 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 15:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 15:19 test-1737386389101265941
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh cat /mount-9p/test-1737386389101265941
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-232451 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3d0e6cde-9a6b-4e88-9d49-dd51c9baa239] Pending
helpers_test.go:344: "busybox-mount" [3d0e6cde-9a6b-4e88-9d49-dd51c9baa239] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3d0e6cde-9a6b-4e88-9d49-dd51c9baa239] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3d0e6cde-9a6b-4e88-9d49-dd51c9baa239] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004380774s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-232451 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdany-port3439113775/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.47s)
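
The any-port flow above keeps a background "minikube mount" process exporting a host directory into the VM over 9p, verifies the mount from inside the guest with findmnt, exercises it from a pod, and then unmounts. A trimmed manual equivalent, with /tmp/mount-demo as an arbitrary placeholder directory:

	out/minikube-linux-amd64 mount -p functional-232451 /tmp/mount-demo:/mount-9p &
	out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-232451 ssh "sudo umount -f /mount-9p"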

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdspecific-port1738715858/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.027414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0120 15:19:55.768666 2136749 retry.go:31] will retry after 288.353282ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdspecific-port1738715858/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 ssh "sudo umount -f /mount-9p": exit status 1 (201.978333ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-232451 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdspecific-port1738715858/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 service list -o json
functional_test.go:1494: Took "307.745269ms" to run "out/minikube-linux-amd64 -p functional-232451 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1665204611/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1665204611/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1665204611/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-232451 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1665204611/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1665204611/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-232451 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1665204611/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.125:32763
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.125:32763
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-232451 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-232451
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-232451 image ls --format short --alsologtostderr:
I0120 15:20:02.748812 2146550 out.go:345] Setting OutFile to fd 1 ...
I0120 15:20:02.748939 2146550 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:02.748949 2146550 out.go:358] Setting ErrFile to fd 2...
I0120 15:20:02.748953 2146550 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:02.749152 2146550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
I0120 15:20:02.749830 2146550 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:02.749957 2146550 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:02.750338 2146550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:02.750411 2146550 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:02.768603 2146550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
I0120 15:20:02.769146 2146550 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:02.769757 2146550 main.go:141] libmachine: Using API Version  1
I0120 15:20:02.769783 2146550 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:02.770175 2146550 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:02.770423 2146550 main.go:141] libmachine: (functional-232451) Calling .GetState
I0120 15:20:02.772513 2146550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:02.772565 2146550 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:02.789699 2146550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33973
I0120 15:20:02.790586 2146550 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:02.791172 2146550 main.go:141] libmachine: Using API Version  1
I0120 15:20:02.791197 2146550 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:02.791551 2146550 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:02.791784 2146550 main.go:141] libmachine: (functional-232451) Calling .DriverName
I0120 15:20:02.791989 2146550 ssh_runner.go:195] Run: systemctl --version
I0120 15:20:02.792025 2146550 main.go:141] libmachine: (functional-232451) Calling .GetSSHHostname
I0120 15:20:02.795179 2146550 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:02.795618 2146550 main.go:141] libmachine: (functional-232451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:af:57", ip: ""} in network mk-functional-232451: {Iface:virbr1 ExpiryTime:2025-01-20 16:16:56 +0000 UTC Type:0 Mac:52:54:00:69:af:57 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:functional-232451 Clientid:01:52:54:00:69:af:57}
I0120 15:20:02.795656 2146550 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined IP address 192.168.39.125 and MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:02.795801 2146550 main.go:141] libmachine: (functional-232451) Calling .GetSSHPort
I0120 15:20:02.796009 2146550 main.go:141] libmachine: (functional-232451) Calling .GetSSHKeyPath
I0120 15:20:02.796199 2146550 main.go:141] libmachine: (functional-232451) Calling .GetSSHUsername
I0120 15:20:02.796335 2146550 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/functional-232451/id_rsa Username:docker}
I0120 15:20:02.874045 2146550 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 15:20:02.912082 2146550 main.go:141] libmachine: Making call to close driver server
I0120 15:20:02.912101 2146550 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:02.912423 2146550 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:02.912459 2146550 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:20:02.912462 2146550 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
I0120 15:20:02.912468 2146550 main.go:141] libmachine: Making call to close driver server
I0120 15:20:02.912475 2146550 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:02.912758 2146550 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:02.912779 2146550 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:20:02.912783 2146550 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
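
As the stderr above shows, the image listing is gathered by opening an SSH session into the VM and reading the runtime's image store with crictl; a roughly equivalent manual command is:

	out/minikube-linux-amd64 -p functional-232451 ssh "sudo crictl images --output json"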

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-232451 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| localhost/minikube-local-cache-test     | functional-232451  | 186e518c736ad | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.0            | a389e107f4ff1 | 70.6MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-proxy              | v1.32.0            | 040f9f8aac8cd | 95.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-232451  | 436a0a442eaf7 | 1.47MB |
| registry.k8s.io/kube-controller-manager | v1.32.0            | 8cab3d2a8bd0f | 90.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.0            | c2e17b8d0f4a3 | 98.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-232451 image ls --format table --alsologtostderr:
I0120 15:20:06.083533 2146716 out.go:345] Setting OutFile to fd 1 ...
I0120 15:20:06.083689 2146716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:06.083701 2146716 out.go:358] Setting ErrFile to fd 2...
I0120 15:20:06.083708 2146716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:06.083900 2146716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
I0120 15:20:06.084600 2146716 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:06.084736 2146716 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:06.085138 2146716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:06.085222 2146716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:06.101966 2146716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
I0120 15:20:06.102521 2146716 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:06.103205 2146716 main.go:141] libmachine: Using API Version  1
I0120 15:20:06.103233 2146716 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:06.103620 2146716 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:06.103845 2146716 main.go:141] libmachine: (functional-232451) Calling .GetState
I0120 15:20:06.106108 2146716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:06.106160 2146716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:06.122435 2146716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
I0120 15:20:06.122961 2146716 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:06.123599 2146716 main.go:141] libmachine: Using API Version  1
I0120 15:20:06.123640 2146716 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:06.123973 2146716 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:06.124182 2146716 main.go:141] libmachine: (functional-232451) Calling .DriverName
I0120 15:20:06.124390 2146716 ssh_runner.go:195] Run: systemctl --version
I0120 15:20:06.124419 2146716 main.go:141] libmachine: (functional-232451) Calling .GetSSHHostname
I0120 15:20:06.127448 2146716 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:06.127966 2146716 main.go:141] libmachine: (functional-232451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:af:57", ip: ""} in network mk-functional-232451: {Iface:virbr1 ExpiryTime:2025-01-20 16:16:56 +0000 UTC Type:0 Mac:52:54:00:69:af:57 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:functional-232451 Clientid:01:52:54:00:69:af:57}
I0120 15:20:06.128004 2146716 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined IP address 192.168.39.125 and MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:06.128148 2146716 main.go:141] libmachine: (functional-232451) Calling .GetSSHPort
I0120 15:20:06.128316 2146716 main.go:141] libmachine: (functional-232451) Calling .GetSSHKeyPath
I0120 15:20:06.128461 2146716 main.go:141] libmachine: (functional-232451) Calling .GetSSHUsername
I0120 15:20:06.128637 2146716 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/functional-232451/id_rsa Username:docker}
I0120 15:20:06.205844 2146716 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 15:20:06.249381 2146716 main.go:141] libmachine: Making call to close driver server
I0120 15:20:06.249408 2146716 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:06.249759 2146716 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
I0120 15:20:06.249766 2146716 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:06.249797 2146716 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:20:06.249807 2146716 main.go:141] libmachine: Making call to close driver server
I0120 15:20:06.249815 2146716 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:06.250113 2146716 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:06.250136 2146716 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:20:06.250165 2146716 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
2025/01/20 15:20:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-232451 image ls --format json --alsologtostderr:
[{"id":"040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4","registry.k8s.io/kube-proxy@sha256:8db2ca0e784c2188157f005aac67afbbb70d3d68747eea23765bef83917a5a31"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"95270297"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204
b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0feb9730f9de32b0b1c5cc0eb756c1f4abf2246f1ac8d3fe75285bfee282d0ac","registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"90789190"},{"id":"186e518c736ad225cfbcea383a214b6b92dd50
bad4a5f8e0104c9de467df6b5e","repoDigests":["localhost/minikube-local-cache-test@sha256:4431cdc56a397c93898b03324d64acb22e919011bf4a94d7f14a89a940951c47"],"repoTags":["localhost/minikube-local-cache-test:functional-232451"],"size":"3330"},{"id":"436a0a442eaf743db2754e9cd2162da2ce8952fdce829163246cb15ec5e77e8a","repoDigests":["localhost/my-image@sha256:1272ee07260d12d5bb862b319a19216abf5fc9fcb13f762a733dd7bac6ec2bde"],"repoTags":["localhost/my-image:functional-232451"],"size":"1468600"},{"id":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b","registry.k8s.io/kube-apiserver@sha256:fe1eb8fc870b01f4b1f470d2b179a1d1a86d6e2fa174bd10c01bf45bc5b03200"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"98051552"},{"id":"f0e944554a33867404c55a4d52527804197706eba1b1763ddb66784f2d0da6c2","repoDigests":["docker.io/library/0196ca273a41a101cc9cef01ee687df56b8ba641f836ba6e609b3d7
cf02af6fa-tmp@sha256:eba0931bffbc0772f96d5ea8f43c6b0e209d87671962f55a28cc38b2bda686aa"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1ce9d9222572dc72760ba1
8589a048b3cf32163dac0708522f3b991974fafdec","registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"70649156"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox
@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-232451 image ls --format json --alsologtostderr:
I0120 15:20:05.866748 2146692 out.go:345] Setting OutFile to fd 1 ...
I0120 15:20:05.866859 2146692 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:05.866864 2146692 out.go:358] Setting ErrFile to fd 2...
I0120 15:20:05.866868 2146692 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:05.867084 2146692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
I0120 15:20:05.867744 2146692 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:05.867852 2146692 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:05.868209 2146692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:05.868253 2146692 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:05.884641 2146692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
I0120 15:20:05.885174 2146692 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:05.885830 2146692 main.go:141] libmachine: Using API Version  1
I0120 15:20:05.885859 2146692 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:05.886220 2146692 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:05.886446 2146692 main.go:141] libmachine: (functional-232451) Calling .GetState
I0120 15:20:05.888555 2146692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:05.888641 2146692 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:05.905083 2146692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
I0120 15:20:05.905610 2146692 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:05.906185 2146692 main.go:141] libmachine: Using API Version  1
I0120 15:20:05.906218 2146692 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:05.906529 2146692 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:05.906788 2146692 main.go:141] libmachine: (functional-232451) Calling .DriverName
I0120 15:20:05.906997 2146692 ssh_runner.go:195] Run: systemctl --version
I0120 15:20:05.907026 2146692 main.go:141] libmachine: (functional-232451) Calling .GetSSHHostname
I0120 15:20:05.909634 2146692 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:05.910017 2146692 main.go:141] libmachine: (functional-232451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:af:57", ip: ""} in network mk-functional-232451: {Iface:virbr1 ExpiryTime:2025-01-20 16:16:56 +0000 UTC Type:0 Mac:52:54:00:69:af:57 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:functional-232451 Clientid:01:52:54:00:69:af:57}
I0120 15:20:05.910036 2146692 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined IP address 192.168.39.125 and MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:05.910206 2146692 main.go:141] libmachine: (functional-232451) Calling .GetSSHPort
I0120 15:20:05.910415 2146692 main.go:141] libmachine: (functional-232451) Calling .GetSSHKeyPath
I0120 15:20:05.910571 2146692 main.go:141] libmachine: (functional-232451) Calling .GetSSHUsername
I0120 15:20:05.910736 2146692 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/functional-232451/id_rsa Username:docker}
I0120 15:20:05.986451 2146692 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 15:20:06.025450 2146692 main.go:141] libmachine: Making call to close driver server
I0120 15:20:06.025463 2146692 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:06.025806 2146692 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:06.025832 2146692 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:20:06.025839 2146692 main.go:141] libmachine: Making call to close driver server
I0120 15:20:06.025846 2146692 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:06.025845 2146692 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
I0120 15:20:06.026114 2146692 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:06.026132 2146692 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
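
Note: the JSON listing above is a flat array of image records. As a rough, non-authoritative sketch (not part of the test suite), the fields visible in this run (id, repoDigests, repoTags, and the string-typed size) can be decoded as below; the binary path and profile name are assumptions copied from the command logged above.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageEntry mirrors only the fields visible in the `image ls --format json`
// output captured in this report; anything else in the payload is ignored.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string, e.g. "4631262"
}

func main() {
	// Same invocation as the test above; binary path and profile name are
	// assumptions carried over from this run.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-232451",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}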

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-232451 image ls --format yaml --alsologtostderr:
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
- registry.k8s.io/kube-apiserver@sha256:fe1eb8fc870b01f4b1f470d2b179a1d1a86d6e2fa174bd10c01bf45bc5b03200
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "98051552"
- id: 8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0feb9730f9de32b0b1c5cc0eb756c1f4abf2246f1ac8d3fe75285bfee282d0ac
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "90789190"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 186e518c736ad225cfbcea383a214b6b92dd50bad4a5f8e0104c9de467df6b5e
repoDigests:
- localhost/minikube-local-cache-test@sha256:4431cdc56a397c93898b03324d64acb22e919011bf4a94d7f14a89a940951c47
repoTags:
- localhost/minikube-local-cache-test:functional-232451
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1ce9d9222572dc72760ba18589a048b3cf32163dac0708522f3b991974fafdec
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "70649156"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
- registry.k8s.io/kube-proxy@sha256:8db2ca0e784c2188157f005aac67afbbb70d3d68747eea23765bef83917a5a31
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "95270297"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-232451 image ls --format yaml --alsologtostderr:
I0120 15:20:02.969698 2146574 out.go:345] Setting OutFile to fd 1 ...
I0120 15:20:02.969815 2146574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:02.969826 2146574 out.go:358] Setting ErrFile to fd 2...
I0120 15:20:02.969831 2146574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:02.970050 2146574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
I0120 15:20:02.970788 2146574 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:02.970936 2146574 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:02.971397 2146574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:02.971480 2146574 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:02.987973 2146574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
I0120 15:20:02.988584 2146574 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:02.989475 2146574 main.go:141] libmachine: Using API Version  1
I0120 15:20:02.989504 2146574 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:02.989897 2146574 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:02.990166 2146574 main.go:141] libmachine: (functional-232451) Calling .GetState
I0120 15:20:02.992245 2146574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:02.992295 2146574 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:03.009594 2146574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36497
I0120 15:20:03.010169 2146574 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:03.010837 2146574 main.go:141] libmachine: Using API Version  1
I0120 15:20:03.010867 2146574 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:03.011324 2146574 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:03.011547 2146574 main.go:141] libmachine: (functional-232451) Calling .DriverName
I0120 15:20:03.011771 2146574 ssh_runner.go:195] Run: systemctl --version
I0120 15:20:03.011812 2146574 main.go:141] libmachine: (functional-232451) Calling .GetSSHHostname
I0120 15:20:03.014947 2146574 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:03.015462 2146574 main.go:141] libmachine: (functional-232451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:af:57", ip: ""} in network mk-functional-232451: {Iface:virbr1 ExpiryTime:2025-01-20 16:16:56 +0000 UTC Type:0 Mac:52:54:00:69:af:57 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:functional-232451 Clientid:01:52:54:00:69:af:57}
I0120 15:20:03.015499 2146574 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined IP address 192.168.39.125 and MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:03.015680 2146574 main.go:141] libmachine: (functional-232451) Calling .GetSSHPort
I0120 15:20:03.015884 2146574 main.go:141] libmachine: (functional-232451) Calling .GetSSHKeyPath
I0120 15:20:03.016055 2146574 main.go:141] libmachine: (functional-232451) Calling .GetSSHUsername
I0120 15:20:03.016194 2146574 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/functional-232451/id_rsa Username:docker}
I0120 15:20:03.093594 2146574 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 15:20:03.130426 2146574 main.go:141] libmachine: Making call to close driver server
I0120 15:20:03.130447 2146574 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:03.130800 2146574 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:03.130825 2146574 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:20:03.130831 2146574 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
I0120 15:20:03.130834 2146574 main.go:141] libmachine: Making call to close driver server
I0120 15:20:03.130844 2146574 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:03.131108 2146574 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:03.131142 2146574 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:20:03.131189 2146574 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-232451 ssh pgrep buildkitd: exit status 1 (203.498945ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image build -t localhost/my-image:functional-232451 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-232451 image build -t localhost/my-image:functional-232451 testdata/build --alsologtostderr: (2.240901071s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-232451 image build -t localhost/my-image:functional-232451 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f0e944554a3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-232451
--> 436a0a442ea
Successfully tagged localhost/my-image:functional-232451
436a0a442eaf743db2754e9cd2162da2ce8952fdce829163246cb15ec5e77e8a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-232451 image build -t localhost/my-image:functional-232451 testdata/build --alsologtostderr:
I0120 15:20:03.390338 2146628 out.go:345] Setting OutFile to fd 1 ...
I0120 15:20:03.390597 2146628 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:03.390626 2146628 out.go:358] Setting ErrFile to fd 2...
I0120 15:20:03.390630 2146628 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:20:03.390821 2146628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
I0120 15:20:03.391464 2146628 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:03.392085 2146628 config.go:182] Loaded profile config "functional-232451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:20:03.392477 2146628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:03.392522 2146628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:03.409011 2146628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
I0120 15:20:03.409580 2146628 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:03.410281 2146628 main.go:141] libmachine: Using API Version  1
I0120 15:20:03.410310 2146628 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:03.410680 2146628 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:03.410894 2146628 main.go:141] libmachine: (functional-232451) Calling .GetState
I0120 15:20:03.412833 2146628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:20:03.412884 2146628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:20:03.429758 2146628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37719
I0120 15:20:03.430324 2146628 main.go:141] libmachine: () Calling .GetVersion
I0120 15:20:03.430919 2146628 main.go:141] libmachine: Using API Version  1
I0120 15:20:03.430943 2146628 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:20:03.431335 2146628 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:20:03.431547 2146628 main.go:141] libmachine: (functional-232451) Calling .DriverName
I0120 15:20:03.431785 2146628 ssh_runner.go:195] Run: systemctl --version
I0120 15:20:03.431820 2146628 main.go:141] libmachine: (functional-232451) Calling .GetSSHHostname
I0120 15:20:03.434666 2146628 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:03.435018 2146628 main.go:141] libmachine: (functional-232451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:af:57", ip: ""} in network mk-functional-232451: {Iface:virbr1 ExpiryTime:2025-01-20 16:16:56 +0000 UTC Type:0 Mac:52:54:00:69:af:57 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:functional-232451 Clientid:01:52:54:00:69:af:57}
I0120 15:20:03.435065 2146628 main.go:141] libmachine: (functional-232451) DBG | domain functional-232451 has defined IP address 192.168.39.125 and MAC address 52:54:00:69:af:57 in network mk-functional-232451
I0120 15:20:03.435146 2146628 main.go:141] libmachine: (functional-232451) Calling .GetSSHPort
I0120 15:20:03.435356 2146628 main.go:141] libmachine: (functional-232451) Calling .GetSSHKeyPath
I0120 15:20:03.435509 2146628 main.go:141] libmachine: (functional-232451) Calling .GetSSHUsername
I0120 15:20:03.435664 2146628 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/functional-232451/id_rsa Username:docker}
I0120 15:20:03.513773 2146628 build_images.go:161] Building image from path: /tmp/build.3718719599.tar
I0120 15:20:03.513868 2146628 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 15:20:03.527121 2146628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3718719599.tar
I0120 15:20:03.532268 2146628 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3718719599.tar: stat -c "%s %y" /var/lib/minikube/build/build.3718719599.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3718719599.tar': No such file or directory
I0120 15:20:03.532320 2146628 ssh_runner.go:362] scp /tmp/build.3718719599.tar --> /var/lib/minikube/build/build.3718719599.tar (3072 bytes)
I0120 15:20:03.561032 2146628 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3718719599
I0120 15:20:03.573375 2146628 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3718719599 -xf /var/lib/minikube/build/build.3718719599.tar
I0120 15:20:03.585607 2146628 crio.go:315] Building image: /var/lib/minikube/build/build.3718719599
I0120 15:20:03.585730 2146628 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-232451 /var/lib/minikube/build/build.3718719599 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0120 15:20:05.541399 2146628 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-232451 /var/lib/minikube/build/build.3718719599 --cgroup-manager=cgroupfs: (1.955628513s)
I0120 15:20:05.541521 2146628 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3718719599
I0120 15:20:05.562450 2146628 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3718719599.tar
I0120 15:20:05.573705 2146628 build_images.go:217] Built localhost/my-image:functional-232451 from /tmp/build.3718719599.tar
I0120 15:20:05.573764 2146628 build_images.go:133] succeeded building to: functional-232451
I0120 15:20:05.573770 2146628 build_images.go:134] failed building to: 
I0120 15:20:05.573845 2146628 main.go:141] libmachine: Making call to close driver server
I0120 15:20:05.573858 2146628 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:05.574253 2146628 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:05.574275 2146628 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:20:05.574283 2146628 main.go:141] libmachine: Making call to close driver server
I0120 15:20:05.574290 2146628 main.go:141] libmachine: (functional-232451) Calling .Close
I0120 15:20:05.574291 2146628 main.go:141] libmachine: (functional-232451) DBG | Closing plugin on server side
I0120 15:20:05.574562 2146628 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:20:05.574582 2146628 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)
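
Note: a hedged Go sketch of the same build-then-verify round trip this test performs, using only the commands recorded above; the binary path, profile name, and image tag are assumptions copied from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Binary path, profile, and tag are assumptions taken from the log above.
	const minikube = "out/minikube-linux-amd64"
	const profile = "functional-232451"

	// Build the image from the test's build context, as in the log above.
	build := exec.Command(minikube, "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Verify the new tag shows up in `image ls`, mirroring functional_test.go:451.
	ls, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(ls), "localhost/my-image") {
		panic("built image not found in image ls output")
	}
}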

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image rm kicbase/echo-server:functional-232451 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-232451 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-232451
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-232451
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-232451
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-218458 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 15:32:14.249586 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-218458 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.478208073s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.19s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-218458 -- rollout status deployment/busybox: (2.913979941s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-njmxg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-qxv2s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-spslp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-njmxg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-qxv2s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-spslp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-njmxg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-qxv2s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-spslp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.17s)
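
Note: a condensed, non-authoritative sketch of the DNS checks this test runs: apply the busybox deployment, wait for the rollout, then nslookup a few names from inside each pod. It calls kubectl with --context directly (as the NodeLabels test further down does) rather than going through the minikube kubectl wrapper used here; the context name, manifest path, and lookup targets are taken from the log above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and panics with its combined output on failure.
func run(args ...string) string {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	kubectl := []string{"kubectl", "--context", "ha-218458"}

	run(append(kubectl, "apply", "-f", "./testdata/ha/ha-pod-dns-test.yaml")...)
	run(append(kubectl, "rollout", "status", "deployment/busybox")...)

	// List the busybox pods, then resolve a few names from inside each one.
	names := run(append(kubectl, "get", "pods", "-o",
		"jsonpath={.items[*].metadata.name}")...)
	for _, pod := range strings.Fields(names) {
		for _, host := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			run(append(kubectl, "exec", pod, "--", "nslookup", host)...)
		}
	}
}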

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-njmxg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-njmxg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-qxv2s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-qxv2s -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-spslp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218458 -- exec busybox-58667487b6-spslp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.28s)
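
Note: the awk/cut pipeline above pulls the resolved address of host.minikube.internal out of busybox's nslookup output (fifth line, third space-separated field) so the test can confirm pods reach the host network. A hedged sketch of the same check for a single pod follows; the pod and context names are copied from this run, and the ping here targets the resolved address rather than the hard-coded 192.168.39.1 used by the test.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same shell pipeline the test uses: take line 5 of nslookup's output
	// and keep its third space-separated field (the resolved address).
	pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-218458", "exec",
		"busybox-58667487b6-njmxg", "--", "sh", "-c", pipeline).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Ping the resolved address once from inside the pod.
	ping := exec.Command("kubectl", "--context", "ha-218458", "exec",
		"busybox-58667487b6-njmxg", "--", "sh", "-c", "ping -c 1 "+hostIP)
	if err := ping.Run(); err != nil {
		panic(err)
	}
}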

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-218458 -v=7 --alsologtostderr
E0120 15:33:37.316576 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-218458 -v=7 --alsologtostderr: (54.891748846s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.79s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-218458 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp testdata/cp-test.txt ha-218458:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2269508709/001/cp-test_ha-218458.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458:/home/docker/cp-test.txt ha-218458-m02:/home/docker/cp-test_ha-218458_ha-218458-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m02 "sudo cat /home/docker/cp-test_ha-218458_ha-218458-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458:/home/docker/cp-test.txt ha-218458-m03:/home/docker/cp-test_ha-218458_ha-218458-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m03 "sudo cat /home/docker/cp-test_ha-218458_ha-218458-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458:/home/docker/cp-test.txt ha-218458-m04:/home/docker/cp-test_ha-218458_ha-218458-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m04 "sudo cat /home/docker/cp-test_ha-218458_ha-218458-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp testdata/cp-test.txt ha-218458-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2269508709/001/cp-test_ha-218458-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m02:/home/docker/cp-test.txt ha-218458:/home/docker/cp-test_ha-218458-m02_ha-218458.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458 "sudo cat /home/docker/cp-test_ha-218458-m02_ha-218458.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m02:/home/docker/cp-test.txt ha-218458-m03:/home/docker/cp-test_ha-218458-m02_ha-218458-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m03 "sudo cat /home/docker/cp-test_ha-218458-m02_ha-218458-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m02:/home/docker/cp-test.txt ha-218458-m04:/home/docker/cp-test_ha-218458-m02_ha-218458-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m04 "sudo cat /home/docker/cp-test_ha-218458-m02_ha-218458-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp testdata/cp-test.txt ha-218458-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2269508709/001/cp-test_ha-218458-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m03:/home/docker/cp-test.txt ha-218458:/home/docker/cp-test_ha-218458-m03_ha-218458.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458 "sudo cat /home/docker/cp-test_ha-218458-m03_ha-218458.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m03:/home/docker/cp-test.txt ha-218458-m02:/home/docker/cp-test_ha-218458-m03_ha-218458-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m02 "sudo cat /home/docker/cp-test_ha-218458-m03_ha-218458-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m03:/home/docker/cp-test.txt ha-218458-m04:/home/docker/cp-test_ha-218458-m03_ha-218458-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m04 "sudo cat /home/docker/cp-test_ha-218458-m03_ha-218458-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp testdata/cp-test.txt ha-218458-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2269508709/001/cp-test_ha-218458-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m04:/home/docker/cp-test.txt ha-218458:/home/docker/cp-test_ha-218458-m04_ha-218458.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458 "sudo cat /home/docker/cp-test_ha-218458-m04_ha-218458.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m04:/home/docker/cp-test.txt ha-218458-m02:/home/docker/cp-test_ha-218458-m04_ha-218458-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m02 "sudo cat /home/docker/cp-test_ha-218458-m04_ha-218458-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 cp ha-218458-m04:/home/docker/cp-test.txt ha-218458-m03:/home/docker/cp-test_ha-218458-m04_ha-218458-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 ssh -n ha-218458-m03 "sudo cat /home/docker/cp-test_ha-218458-m04_ha-218458-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.77s)
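
Note: the copy matrix above is the same two commands repeated for every source/target node pair: minikube cp the file across, then ssh -n <node> "sudo cat" it back. A condensed sketch of that loop, with the binary path, profile, node names, and paths taken from this run.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and panics with its combined output on failure.
func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
	}
}

func main() {
	const minikube = "out/minikube-linux-amd64"
	nodes := []string{"ha-218458", "ha-218458-m02", "ha-218458-m03", "ha-218458-m04"}

	for _, src := range nodes {
		// Seed the source node with the test fixture (helpers_test.go:556).
		run(minikube, "-p", "ha-218458", "cp", "testdata/cp-test.txt",
			src+":/home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			target := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run(minikube, "-p", "ha-218458", "cp",
				src+":/home/docker/cp-test.txt", dst+":"+target)
			// Read the copy back over SSH to confirm it arrived (helpers_test.go:534).
			run(minikube, "-p", "ha-218458", "ssh", "-n", dst, "sudo cat "+target)
		}
	}
}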

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 node stop m02 -v=7 --alsologtostderr
E0120 15:34:45.664080 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:45.670584 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:45.682022 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:45.703478 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:45.745007 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:45.826526 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:45.988129 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:46.310419 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:46.952547 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:48.234224 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:50.796144 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:34:55.918250 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:35:06.160522 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:35:26.641851 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:36:07.603391 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-218458 node stop m02 -v=7 --alsologtostderr: (1m31.021386919s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr: exit status 7 (694.899299ms)

                                                
                                                
-- stdout --
	ha-218458
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218458-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-218458-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218458-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 15:36:15.797600 2153586 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:36:15.797741 2153586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:36:15.797755 2153586 out.go:358] Setting ErrFile to fd 2...
	I0120 15:36:15.797761 2153586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:36:15.797954 2153586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:36:15.798169 2153586 out.go:352] Setting JSON to false
	I0120 15:36:15.798209 2153586 mustload.go:65] Loading cluster: ha-218458
	I0120 15:36:15.798352 2153586 notify.go:220] Checking for updates...
	I0120 15:36:15.798685 2153586 config.go:182] Loaded profile config "ha-218458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:36:15.798710 2153586 status.go:174] checking status of ha-218458 ...
	I0120 15:36:15.799132 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:15.799176 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:15.817755 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0120 15:36:15.818283 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:15.819032 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:15.819080 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:15.819535 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:15.819780 2153586 main.go:141] libmachine: (ha-218458) Calling .GetState
	I0120 15:36:15.821582 2153586 status.go:371] ha-218458 host status = "Running" (err=<nil>)
	I0120 15:36:15.821599 2153586 host.go:66] Checking if "ha-218458" exists ...
	I0120 15:36:15.821880 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:15.821915 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:15.837421 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0120 15:36:15.837909 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:15.838411 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:15.838434 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:15.838775 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:15.839016 2153586 main.go:141] libmachine: (ha-218458) Calling .GetIP
	I0120 15:36:15.842399 2153586 main.go:141] libmachine: (ha-218458) DBG | domain ha-218458 has defined MAC address 52:54:00:6d:17:44 in network mk-ha-218458
	I0120 15:36:15.843037 2153586 main.go:141] libmachine: (ha-218458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:17:44", ip: ""} in network mk-ha-218458: {Iface:virbr1 ExpiryTime:2025-01-20 16:30:20 +0000 UTC Type:0 Mac:52:54:00:6d:17:44 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-218458 Clientid:01:52:54:00:6d:17:44}
	I0120 15:36:15.843068 2153586 main.go:141] libmachine: (ha-218458) DBG | domain ha-218458 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:17:44 in network mk-ha-218458
	I0120 15:36:15.843241 2153586 host.go:66] Checking if "ha-218458" exists ...
	I0120 15:36:15.843645 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:15.843694 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:15.860135 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44067
	I0120 15:36:15.860608 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:15.861080 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:15.861101 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:15.861415 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:15.861647 2153586 main.go:141] libmachine: (ha-218458) Calling .DriverName
	I0120 15:36:15.861856 2153586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 15:36:15.861890 2153586 main.go:141] libmachine: (ha-218458) Calling .GetSSHHostname
	I0120 15:36:15.864755 2153586 main.go:141] libmachine: (ha-218458) DBG | domain ha-218458 has defined MAC address 52:54:00:6d:17:44 in network mk-ha-218458
	I0120 15:36:15.865125 2153586 main.go:141] libmachine: (ha-218458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:17:44", ip: ""} in network mk-ha-218458: {Iface:virbr1 ExpiryTime:2025-01-20 16:30:20 +0000 UTC Type:0 Mac:52:54:00:6d:17:44 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-218458 Clientid:01:52:54:00:6d:17:44}
	I0120 15:36:15.865156 2153586 main.go:141] libmachine: (ha-218458) DBG | domain ha-218458 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:17:44 in network mk-ha-218458
	I0120 15:36:15.865282 2153586 main.go:141] libmachine: (ha-218458) Calling .GetSSHPort
	I0120 15:36:15.865477 2153586 main.go:141] libmachine: (ha-218458) Calling .GetSSHKeyPath
	I0120 15:36:15.865625 2153586 main.go:141] libmachine: (ha-218458) Calling .GetSSHUsername
	I0120 15:36:15.865805 2153586 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/ha-218458/id_rsa Username:docker}
	I0120 15:36:15.969888 2153586 ssh_runner.go:195] Run: systemctl --version
	I0120 15:36:15.981473 2153586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 15:36:16.000196 2153586 kubeconfig.go:125] found "ha-218458" server: "https://192.168.39.254:8443"
	I0120 15:36:16.000261 2153586 api_server.go:166] Checking apiserver status ...
	I0120 15:36:16.000314 2153586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 15:36:16.018973 2153586 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0120 15:36:16.029742 2153586 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 15:36:16.029806 2153586 ssh_runner.go:195] Run: ls
	I0120 15:36:16.035198 2153586 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 15:36:16.039979 2153586 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 15:36:16.040007 2153586 status.go:463] ha-218458 apiserver status = Running (err=<nil>)
	I0120 15:36:16.040037 2153586 status.go:176] ha-218458 status: &{Name:ha-218458 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:36:16.040071 2153586 status.go:174] checking status of ha-218458-m02 ...
	I0120 15:36:16.040380 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:16.040430 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:16.056582 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0120 15:36:16.057191 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:16.057709 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:16.057733 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:16.058063 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:16.058305 2153586 main.go:141] libmachine: (ha-218458-m02) Calling .GetState
	I0120 15:36:16.060015 2153586 status.go:371] ha-218458-m02 host status = "Stopped" (err=<nil>)
	I0120 15:36:16.060032 2153586 status.go:384] host is not running, skipping remaining checks
	I0120 15:36:16.060040 2153586 status.go:176] ha-218458-m02 status: &{Name:ha-218458-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:36:16.060063 2153586 status.go:174] checking status of ha-218458-m03 ...
	I0120 15:36:16.060518 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:16.060569 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:16.077086 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0120 15:36:16.077556 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:16.078118 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:16.078145 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:16.078494 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:16.078752 2153586 main.go:141] libmachine: (ha-218458-m03) Calling .GetState
	I0120 15:36:16.080398 2153586 status.go:371] ha-218458-m03 host status = "Running" (err=<nil>)
	I0120 15:36:16.080419 2153586 host.go:66] Checking if "ha-218458-m03" exists ...
	I0120 15:36:16.080827 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:16.080874 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:16.097085 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0120 15:36:16.097552 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:16.098035 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:16.098073 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:16.098432 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:16.098694 2153586 main.go:141] libmachine: (ha-218458-m03) Calling .GetIP
	I0120 15:36:16.101630 2153586 main.go:141] libmachine: (ha-218458-m03) DBG | domain ha-218458-m03 has defined MAC address 52:54:00:56:3d:ad in network mk-ha-218458
	I0120 15:36:16.102107 2153586 main.go:141] libmachine: (ha-218458-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:3d:ad", ip: ""} in network mk-ha-218458: {Iface:virbr1 ExpiryTime:2025-01-20 16:32:23 +0000 UTC Type:0 Mac:52:54:00:56:3d:ad Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-218458-m03 Clientid:01:52:54:00:56:3d:ad}
	I0120 15:36:16.102138 2153586 main.go:141] libmachine: (ha-218458-m03) DBG | domain ha-218458-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:56:3d:ad in network mk-ha-218458
	I0120 15:36:16.102374 2153586 host.go:66] Checking if "ha-218458-m03" exists ...
	I0120 15:36:16.102716 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:16.102761 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:16.118540 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I0120 15:36:16.119104 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:16.119617 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:16.119640 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:16.119952 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:16.120144 2153586 main.go:141] libmachine: (ha-218458-m03) Calling .DriverName
	I0120 15:36:16.120350 2153586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 15:36:16.120375 2153586 main.go:141] libmachine: (ha-218458-m03) Calling .GetSSHHostname
	I0120 15:36:16.123483 2153586 main.go:141] libmachine: (ha-218458-m03) DBG | domain ha-218458-m03 has defined MAC address 52:54:00:56:3d:ad in network mk-ha-218458
	I0120 15:36:16.123889 2153586 main.go:141] libmachine: (ha-218458-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:3d:ad", ip: ""} in network mk-ha-218458: {Iface:virbr1 ExpiryTime:2025-01-20 16:32:23 +0000 UTC Type:0 Mac:52:54:00:56:3d:ad Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-218458-m03 Clientid:01:52:54:00:56:3d:ad}
	I0120 15:36:16.123919 2153586 main.go:141] libmachine: (ha-218458-m03) DBG | domain ha-218458-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:56:3d:ad in network mk-ha-218458
	I0120 15:36:16.124050 2153586 main.go:141] libmachine: (ha-218458-m03) Calling .GetSSHPort
	I0120 15:36:16.124215 2153586 main.go:141] libmachine: (ha-218458-m03) Calling .GetSSHKeyPath
	I0120 15:36:16.124362 2153586 main.go:141] libmachine: (ha-218458-m03) Calling .GetSSHUsername
	I0120 15:36:16.124468 2153586 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/ha-218458-m03/id_rsa Username:docker}
	I0120 15:36:16.213658 2153586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 15:36:16.232632 2153586 kubeconfig.go:125] found "ha-218458" server: "https://192.168.39.254:8443"
	I0120 15:36:16.232668 2153586 api_server.go:166] Checking apiserver status ...
	I0120 15:36:16.232723 2153586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 15:36:16.249199 2153586 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0120 15:36:16.260146 2153586 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 15:36:16.260225 2153586 ssh_runner.go:195] Run: ls
	I0120 15:36:16.265290 2153586 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 15:36:16.271538 2153586 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 15:36:16.271569 2153586 status.go:463] ha-218458-m03 apiserver status = Running (err=<nil>)
	I0120 15:36:16.271579 2153586 status.go:176] ha-218458-m03 status: &{Name:ha-218458-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:36:16.271607 2153586 status.go:174] checking status of ha-218458-m04 ...
	I0120 15:36:16.271925 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:16.271975 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:16.288478 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37247
	I0120 15:36:16.288926 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:16.289446 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:16.289470 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:16.289822 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:16.290078 2153586 main.go:141] libmachine: (ha-218458-m04) Calling .GetState
	I0120 15:36:16.291552 2153586 status.go:371] ha-218458-m04 host status = "Running" (err=<nil>)
	I0120 15:36:16.291568 2153586 host.go:66] Checking if "ha-218458-m04" exists ...
	I0120 15:36:16.291890 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:16.291935 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:16.307558 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0120 15:36:16.308094 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:16.308638 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:16.308664 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:16.308993 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:16.309241 2153586 main.go:141] libmachine: (ha-218458-m04) Calling .GetIP
	I0120 15:36:16.312327 2153586 main.go:141] libmachine: (ha-218458-m04) DBG | domain ha-218458-m04 has defined MAC address 52:54:00:19:54:8b in network mk-ha-218458
	I0120 15:36:16.312833 2153586 main.go:141] libmachine: (ha-218458-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:54:8b", ip: ""} in network mk-ha-218458: {Iface:virbr1 ExpiryTime:2025-01-20 16:33:50 +0000 UTC Type:0 Mac:52:54:00:19:54:8b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-218458-m04 Clientid:01:52:54:00:19:54:8b}
	I0120 15:36:16.312856 2153586 main.go:141] libmachine: (ha-218458-m04) DBG | domain ha-218458-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:19:54:8b in network mk-ha-218458
	I0120 15:36:16.313019 2153586 host.go:66] Checking if "ha-218458-m04" exists ...
	I0120 15:36:16.313361 2153586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:36:16.313403 2153586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:36:16.329264 2153586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0120 15:36:16.329735 2153586 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:36:16.330323 2153586 main.go:141] libmachine: Using API Version  1
	I0120 15:36:16.330355 2153586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:36:16.330754 2153586 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:36:16.331011 2153586 main.go:141] libmachine: (ha-218458-m04) Calling .DriverName
	I0120 15:36:16.331219 2153586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 15:36:16.331244 2153586 main.go:141] libmachine: (ha-218458-m04) Calling .GetSSHHostname
	I0120 15:36:16.334517 2153586 main.go:141] libmachine: (ha-218458-m04) DBG | domain ha-218458-m04 has defined MAC address 52:54:00:19:54:8b in network mk-ha-218458
	I0120 15:36:16.334941 2153586 main.go:141] libmachine: (ha-218458-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:54:8b", ip: ""} in network mk-ha-218458: {Iface:virbr1 ExpiryTime:2025-01-20 16:33:50 +0000 UTC Type:0 Mac:52:54:00:19:54:8b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-218458-m04 Clientid:01:52:54:00:19:54:8b}
	I0120 15:36:16.334971 2153586 main.go:141] libmachine: (ha-218458-m04) DBG | domain ha-218458-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:19:54:8b in network mk-ha-218458
	I0120 15:36:16.335088 2153586 main.go:141] libmachine: (ha-218458-m04) Calling .GetSSHPort
	I0120 15:36:16.335279 2153586 main.go:141] libmachine: (ha-218458-m04) Calling .GetSSHKeyPath
	I0120 15:36:16.335440 2153586 main.go:141] libmachine: (ha-218458-m04) Calling .GetSSHUsername
	I0120 15:36:16.335579 2153586 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/ha-218458-m04/id_rsa Username:docker}
	I0120 15:36:16.420161 2153586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 15:36:16.437258 2153586 status.go:176] ha-218458-m04 status: &{Name:ha-218458-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.72s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (54.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-218458 node start m02 -v=7 --alsologtostderr: (53.255393305s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (54.24s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (446.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-218458 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-218458 -v=7 --alsologtostderr
E0120 15:37:14.250464 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:37:29.524958 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:39:45.664372 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:40:13.366453 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-218458 -v=7 --alsologtostderr: (4m34.375121207s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-218458 --wait=true -v=7 --alsologtostderr
E0120 15:42:14.250831 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-218458 --wait=true -v=7 --alsologtostderr: (2m51.581683048s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-218458
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (446.07s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 node delete m03 -v=7 --alsologtostderr
E0120 15:44:45.663957 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-218458 node delete m03 -v=7 --alsologtostderr: (17.561523507s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.39s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (273.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 stop -v=7 --alsologtostderr
E0120 15:47:14.250382 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-218458 stop -v=7 --alsologtostderr: (4m32.885163449s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr: exit status 7 (121.05807ms)

                                                
                                                
-- stdout --
	ha-218458
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-218458-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-218458-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 15:49:30.333258 2157861 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:49:30.333510 2157861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:49:30.333519 2157861 out.go:358] Setting ErrFile to fd 2...
	I0120 15:49:30.333523 2157861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:49:30.333707 2157861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 15:49:30.333910 2157861 out.go:352] Setting JSON to false
	I0120 15:49:30.333944 2157861 mustload.go:65] Loading cluster: ha-218458
	I0120 15:49:30.333990 2157861 notify.go:220] Checking for updates...
	I0120 15:49:30.334409 2157861 config.go:182] Loaded profile config "ha-218458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 15:49:30.334433 2157861 status.go:174] checking status of ha-218458 ...
	I0120 15:49:30.334902 2157861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:49:30.334946 2157861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:49:30.360374 2157861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41769
	I0120 15:49:30.360869 2157861 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:49:30.361624 2157861 main.go:141] libmachine: Using API Version  1
	I0120 15:49:30.361664 2157861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:49:30.362110 2157861 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:49:30.362351 2157861 main.go:141] libmachine: (ha-218458) Calling .GetState
	I0120 15:49:30.364538 2157861 status.go:371] ha-218458 host status = "Stopped" (err=<nil>)
	I0120 15:49:30.364561 2157861 status.go:384] host is not running, skipping remaining checks
	I0120 15:49:30.364567 2157861 status.go:176] ha-218458 status: &{Name:ha-218458 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:49:30.364635 2157861 status.go:174] checking status of ha-218458-m02 ...
	I0120 15:49:30.364917 2157861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:49:30.364951 2157861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:49:30.380142 2157861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0120 15:49:30.380572 2157861 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:49:30.381046 2157861 main.go:141] libmachine: Using API Version  1
	I0120 15:49:30.381080 2157861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:49:30.381371 2157861 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:49:30.381559 2157861 main.go:141] libmachine: (ha-218458-m02) Calling .GetState
	I0120 15:49:30.383437 2157861 status.go:371] ha-218458-m02 host status = "Stopped" (err=<nil>)
	I0120 15:49:30.383454 2157861 status.go:384] host is not running, skipping remaining checks
	I0120 15:49:30.383461 2157861 status.go:176] ha-218458-m02 status: &{Name:ha-218458-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:49:30.383483 2157861 status.go:174] checking status of ha-218458-m04 ...
	I0120 15:49:30.383761 2157861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 15:49:30.383795 2157861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 15:49:30.398813 2157861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0120 15:49:30.399375 2157861 main.go:141] libmachine: () Calling .GetVersion
	I0120 15:49:30.399913 2157861 main.go:141] libmachine: Using API Version  1
	I0120 15:49:30.399941 2157861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 15:49:30.400294 2157861 main.go:141] libmachine: () Calling .GetMachineName
	I0120 15:49:30.400523 2157861 main.go:141] libmachine: (ha-218458-m04) Calling .GetState
	I0120 15:49:30.402154 2157861 status.go:371] ha-218458-m04 host status = "Stopped" (err=<nil>)
	I0120 15:49:30.402169 2157861 status.go:384] host is not running, skipping remaining checks
	I0120 15:49:30.402174 2157861 status.go:176] ha-218458-m04 status: &{Name:ha-218458-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (273.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (128s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-218458 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 15:49:45.664319 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:50:17.318472 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:51:08.727841 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-218458 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m7.209635561s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (128.00s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-218458 --control-plane -v=7 --alsologtostderr
E0120 15:52:14.254845 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-218458 --control-plane -v=7 --alsologtostderr: (1m18.017298553s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-218458 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                    
TestJSONOutput/start/Command (59.71s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-533012 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-533012 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (59.704559934s)
--- PASS: TestJSONOutput/start/Command (59.71s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-533012 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-533012 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-533012 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-533012 --output=json --user=testUser: (7.380345034s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-686704 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-686704 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.797097ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f239cc7d-47d6-4cb6-ba05-58ddada0895e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-686704] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3c66807-9a4c-495f-9e55-e821c2973cea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20109"}}
	{"specversion":"1.0","id":"17af2f8b-84cf-4e2f-82e6-88d08ddafca2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8b2f24ff-d6b6-4423-939e-60510191145c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig"}}
	{"specversion":"1.0","id":"e80f5500-294b-48a5-b690-5b936a56e529","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube"}}
	{"specversion":"1.0","id":"91c1a1a7-3d40-4bc5-a81d-2ddabebae0a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4601558b-ed2c-49ac-b243-aa4661dee1c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"61d9d99c-328d-41c2-907d-46aaaa3c76d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-686704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-686704
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (92.4s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-388072 --driver=kvm2  --container-runtime=crio
E0120 15:54:45.663629 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-388072 --driver=kvm2  --container-runtime=crio: (43.287856869s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-400295 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-400295 --driver=kvm2  --container-runtime=crio: (46.167420711s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-388072
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-400295
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-400295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-400295
helpers_test.go:175: Cleaning up "first-388072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-388072
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-388072: (1.019017356s)
--- PASS: TestMinikubeProfile (92.40s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-506088 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-506088 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.231594947s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.23s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-506088 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-506088 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.79s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-524139 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-524139 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.790906919s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.79s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524139 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524139 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-506088 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524139 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524139 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (2.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-524139
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-524139: (2.295210313s)
--- PASS: TestMountStart/serial/Stop (2.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-524139
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-524139: (22.565346343s)
--- PASS: TestMountStart/serial/RestartStopped (23.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524139 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524139 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (119.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-647253 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 15:57:14.250443 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-647253 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.608032298s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.04s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-647253 -- rollout status deployment/busybox: (3.846530176s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-5pwl8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-z4vg8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-5pwl8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-z4vg8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-5pwl8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-z4vg8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.39s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-5pwl8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-5pwl8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-z4vg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-647253 -- exec busybox-58667487b6-z4vg8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (51.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-647253 -v 3 --alsologtostderr
E0120 15:59:45.663928 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-647253 -v 3 --alsologtostderr: (51.155691561s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.77s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-647253 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp testdata/cp-test.txt multinode-647253:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3844696929/001/cp-test_multinode-647253.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253:/home/docker/cp-test.txt multinode-647253-m02:/home/docker/cp-test_multinode-647253_multinode-647253-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m02 "sudo cat /home/docker/cp-test_multinode-647253_multinode-647253-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253:/home/docker/cp-test.txt multinode-647253-m03:/home/docker/cp-test_multinode-647253_multinode-647253-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m03 "sudo cat /home/docker/cp-test_multinode-647253_multinode-647253-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp testdata/cp-test.txt multinode-647253-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3844696929/001/cp-test_multinode-647253-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253-m02:/home/docker/cp-test.txt multinode-647253:/home/docker/cp-test_multinode-647253-m02_multinode-647253.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253 "sudo cat /home/docker/cp-test_multinode-647253-m02_multinode-647253.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253-m02:/home/docker/cp-test.txt multinode-647253-m03:/home/docker/cp-test_multinode-647253-m02_multinode-647253-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m03 "sudo cat /home/docker/cp-test_multinode-647253-m02_multinode-647253-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp testdata/cp-test.txt multinode-647253-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3844696929/001/cp-test_multinode-647253-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253-m03:/home/docker/cp-test.txt multinode-647253:/home/docker/cp-test_multinode-647253-m03_multinode-647253.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253 "sudo cat /home/docker/cp-test_multinode-647253-m03_multinode-647253.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 cp multinode-647253-m03:/home/docker/cp-test.txt multinode-647253-m02:/home/docker/cp-test_multinode-647253-m03_multinode-647253-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 ssh -n multinode-647253-m02 "sudo cat /home/docker/cp-test_multinode-647253-m03_multinode-647253-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.72s)

                                                
                                    
TestMultiNode/serial/StopNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-647253 node stop m03: (1.546636128s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-647253 status: exit status 7 (453.500045ms)

                                                
                                                
-- stdout --
	multinode-647253
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-647253-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-647253-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-647253 status --alsologtostderr: exit status 7 (445.826474ms)

                                                
                                                
-- stdout --
	multinode-647253
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-647253-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-647253-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:00:16.526492 2165597 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:00:16.526620 2165597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:00:16.526632 2165597 out.go:358] Setting ErrFile to fd 2...
	I0120 16:00:16.526638 2165597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:00:16.526873 2165597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:00:16.527109 2165597 out.go:352] Setting JSON to false
	I0120 16:00:16.527143 2165597 mustload.go:65] Loading cluster: multinode-647253
	I0120 16:00:16.527247 2165597 notify.go:220] Checking for updates...
	I0120 16:00:16.527624 2165597 config.go:182] Loaded profile config "multinode-647253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:00:16.527650 2165597 status.go:174] checking status of multinode-647253 ...
	I0120 16:00:16.528178 2165597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:00:16.528234 2165597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:00:16.544863 2165597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0120 16:00:16.545483 2165597 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:00:16.546228 2165597 main.go:141] libmachine: Using API Version  1
	I0120 16:00:16.546256 2165597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:00:16.546817 2165597 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:00:16.547102 2165597 main.go:141] libmachine: (multinode-647253) Calling .GetState
	I0120 16:00:16.548913 2165597 status.go:371] multinode-647253 host status = "Running" (err=<nil>)
	I0120 16:00:16.548931 2165597 host.go:66] Checking if "multinode-647253" exists ...
	I0120 16:00:16.549250 2165597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:00:16.549298 2165597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:00:16.565849 2165597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33935
	I0120 16:00:16.566384 2165597 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:00:16.566974 2165597 main.go:141] libmachine: Using API Version  1
	I0120 16:00:16.567002 2165597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:00:16.567354 2165597 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:00:16.567575 2165597 main.go:141] libmachine: (multinode-647253) Calling .GetIP
	I0120 16:00:16.570450 2165597 main.go:141] libmachine: (multinode-647253) DBG | domain multinode-647253 has defined MAC address 52:54:00:0c:63:5b in network mk-multinode-647253
	I0120 16:00:16.570994 2165597 main.go:141] libmachine: (multinode-647253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:63:5b", ip: ""} in network mk-multinode-647253: {Iface:virbr1 ExpiryTime:2025-01-20 16:57:24 +0000 UTC Type:0 Mac:52:54:00:0c:63:5b Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:multinode-647253 Clientid:01:52:54:00:0c:63:5b}
	I0120 16:00:16.571031 2165597 main.go:141] libmachine: (multinode-647253) DBG | domain multinode-647253 has defined IP address 192.168.39.236 and MAC address 52:54:00:0c:63:5b in network mk-multinode-647253
	I0120 16:00:16.571163 2165597 host.go:66] Checking if "multinode-647253" exists ...
	I0120 16:00:16.571474 2165597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:00:16.571525 2165597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:00:16.587617 2165597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0120 16:00:16.588105 2165597 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:00:16.588671 2165597 main.go:141] libmachine: Using API Version  1
	I0120 16:00:16.588700 2165597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:00:16.589063 2165597 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:00:16.589288 2165597 main.go:141] libmachine: (multinode-647253) Calling .DriverName
	I0120 16:00:16.589491 2165597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 16:00:16.589527 2165597 main.go:141] libmachine: (multinode-647253) Calling .GetSSHHostname
	I0120 16:00:16.592573 2165597 main.go:141] libmachine: (multinode-647253) DBG | domain multinode-647253 has defined MAC address 52:54:00:0c:63:5b in network mk-multinode-647253
	I0120 16:00:16.593040 2165597 main.go:141] libmachine: (multinode-647253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:63:5b", ip: ""} in network mk-multinode-647253: {Iface:virbr1 ExpiryTime:2025-01-20 16:57:24 +0000 UTC Type:0 Mac:52:54:00:0c:63:5b Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:multinode-647253 Clientid:01:52:54:00:0c:63:5b}
	I0120 16:00:16.593059 2165597 main.go:141] libmachine: (multinode-647253) DBG | domain multinode-647253 has defined IP address 192.168.39.236 and MAC address 52:54:00:0c:63:5b in network mk-multinode-647253
	I0120 16:00:16.593220 2165597 main.go:141] libmachine: (multinode-647253) Calling .GetSSHPort
	I0120 16:00:16.593450 2165597 main.go:141] libmachine: (multinode-647253) Calling .GetSSHKeyPath
	I0120 16:00:16.593610 2165597 main.go:141] libmachine: (multinode-647253) Calling .GetSSHUsername
	I0120 16:00:16.593728 2165597 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/multinode-647253/id_rsa Username:docker}
	I0120 16:00:16.675067 2165597 ssh_runner.go:195] Run: systemctl --version
	I0120 16:00:16.682057 2165597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:00:16.699269 2165597 kubeconfig.go:125] found "multinode-647253" server: "https://192.168.39.236:8443"
	I0120 16:00:16.699316 2165597 api_server.go:166] Checking apiserver status ...
	I0120 16:00:16.699354 2165597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:00:16.716400 2165597 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0120 16:00:16.728029 2165597 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 16:00:16.728103 2165597 ssh_runner.go:195] Run: ls
	I0120 16:00:16.732710 2165597 api_server.go:253] Checking apiserver healthz at https://192.168.39.236:8443/healthz ...
	I0120 16:00:16.738375 2165597 api_server.go:279] https://192.168.39.236:8443/healthz returned 200:
	ok
	I0120 16:00:16.738402 2165597 status.go:463] multinode-647253 apiserver status = Running (err=<nil>)
	I0120 16:00:16.738427 2165597 status.go:176] multinode-647253 status: &{Name:multinode-647253 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 16:00:16.738473 2165597 status.go:174] checking status of multinode-647253-m02 ...
	I0120 16:00:16.738857 2165597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:00:16.738906 2165597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:00:16.755813 2165597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40601
	I0120 16:00:16.756427 2165597 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:00:16.756981 2165597 main.go:141] libmachine: Using API Version  1
	I0120 16:00:16.757006 2165597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:00:16.757341 2165597 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:00:16.757562 2165597 main.go:141] libmachine: (multinode-647253-m02) Calling .GetState
	I0120 16:00:16.759209 2165597 status.go:371] multinode-647253-m02 host status = "Running" (err=<nil>)
	I0120 16:00:16.759230 2165597 host.go:66] Checking if "multinode-647253-m02" exists ...
	I0120 16:00:16.759520 2165597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:00:16.759558 2165597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:00:16.775622 2165597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0120 16:00:16.776107 2165597 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:00:16.776650 2165597 main.go:141] libmachine: Using API Version  1
	I0120 16:00:16.776675 2165597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:00:16.776961 2165597 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:00:16.777124 2165597 main.go:141] libmachine: (multinode-647253-m02) Calling .GetIP
	I0120 16:00:16.780015 2165597 main.go:141] libmachine: (multinode-647253-m02) DBG | domain multinode-647253-m02 has defined MAC address 52:54:00:ea:52:7f in network mk-multinode-647253
	I0120 16:00:16.780473 2165597 main.go:141] libmachine: (multinode-647253-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:52:7f", ip: ""} in network mk-multinode-647253: {Iface:virbr1 ExpiryTime:2025-01-20 16:58:32 +0000 UTC Type:0 Mac:52:54:00:ea:52:7f Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-647253-m02 Clientid:01:52:54:00:ea:52:7f}
	I0120 16:00:16.780515 2165597 main.go:141] libmachine: (multinode-647253-m02) DBG | domain multinode-647253-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:ea:52:7f in network mk-multinode-647253
	I0120 16:00:16.780641 2165597 host.go:66] Checking if "multinode-647253-m02" exists ...
	I0120 16:00:16.781112 2165597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:00:16.781166 2165597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:00:16.797005 2165597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45325
	I0120 16:00:16.797514 2165597 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:00:16.798095 2165597 main.go:141] libmachine: Using API Version  1
	I0120 16:00:16.798119 2165597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:00:16.798427 2165597 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:00:16.798637 2165597 main.go:141] libmachine: (multinode-647253-m02) Calling .DriverName
	I0120 16:00:16.798801 2165597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 16:00:16.798824 2165597 main.go:141] libmachine: (multinode-647253-m02) Calling .GetSSHHostname
	I0120 16:00:16.801764 2165597 main.go:141] libmachine: (multinode-647253-m02) DBG | domain multinode-647253-m02 has defined MAC address 52:54:00:ea:52:7f in network mk-multinode-647253
	I0120 16:00:16.802227 2165597 main.go:141] libmachine: (multinode-647253-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:52:7f", ip: ""} in network mk-multinode-647253: {Iface:virbr1 ExpiryTime:2025-01-20 16:58:32 +0000 UTC Type:0 Mac:52:54:00:ea:52:7f Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-647253-m02 Clientid:01:52:54:00:ea:52:7f}
	I0120 16:00:16.802256 2165597 main.go:141] libmachine: (multinode-647253-m02) DBG | domain multinode-647253-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:ea:52:7f in network mk-multinode-647253
	I0120 16:00:16.802462 2165597 main.go:141] libmachine: (multinode-647253-m02) Calling .GetSSHPort
	I0120 16:00:16.802711 2165597 main.go:141] libmachine: (multinode-647253-m02) Calling .GetSSHKeyPath
	I0120 16:00:16.802878 2165597 main.go:141] libmachine: (multinode-647253-m02) Calling .GetSSHUsername
	I0120 16:00:16.803076 2165597 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/multinode-647253-m02/id_rsa Username:docker}
	I0120 16:00:16.882241 2165597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:00:16.898409 2165597 status.go:176] multinode-647253-m02 status: &{Name:multinode-647253-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 16:00:16.898465 2165597 status.go:174] checking status of multinode-647253-m03 ...
	I0120 16:00:16.898912 2165597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:00:16.898971 2165597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:00:16.915628 2165597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I0120 16:00:16.916139 2165597 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:00:16.916698 2165597 main.go:141] libmachine: Using API Version  1
	I0120 16:00:16.916727 2165597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:00:16.917061 2165597 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:00:16.917271 2165597 main.go:141] libmachine: (multinode-647253-m03) Calling .GetState
	I0120 16:00:16.919314 2165597 status.go:371] multinode-647253-m03 host status = "Stopped" (err=<nil>)
	I0120 16:00:16.919333 2165597 status.go:384] host is not running, skipping remaining checks
	I0120 16:00:16.919341 2165597 status.go:176] multinode-647253-m03 status: &{Name:multinode-647253-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
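Note: the exit status 7 above is expected; minikube status returns a non-zero code (7 here) when a node's host is stopped. A minimal sketch of reproducing the same check by hand, assuming the multinode-647253 profile built in the earlier steps:

	out/minikube-linux-amd64 -p multinode-647253 node stop m03   # stop only the third node
	out/minikube-linux-amd64 -p multinode-647253 status          # per-node status; exits 7 while m03 is stopped
	echo $?                                                      # prints 7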

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-647253 node start m03 -v=7 --alsologtostderr: (38.161122184s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.82s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (329.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-647253
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-647253
E0120 16:02:14.255910 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-647253: (3m3.20495963s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-647253 --wait=true -v=8 --alsologtostderr
E0120 16:04:45.664353 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-647253 --wait=true -v=8 --alsologtostderr: (2m25.74672574s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-647253
--- PASS: TestMultiNode/serial/RestartKeepsNodes (329.06s)
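For reference, the restart flow exercised above condenses to four CLI calls; a sketch, assuming the same multinode-647253 profile:

	out/minikube-linux-amd64 node list -p multinode-647253    # record the node list before the restart
	out/minikube-linux-amd64 stop -p multinode-647253         # stop all nodes (about 3 minutes in this run)
	out/minikube-linux-amd64 start -p multinode-647253 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-647253    # the test expects the same node list as before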

                                                
                                    
TestMultiNode/serial/DeleteNode (2.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-647253 node delete m03: (2.178937339s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 stop
E0120 16:06:57.321480 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:07:14.257129 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:07:48.730816 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-647253 stop: (3m1.934256976s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-647253 status: exit status 7 (97.397703ms)

                                                
                                                
-- stdout --
	multinode-647253
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-647253-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-647253 status --alsologtostderr: exit status 7 (93.383002ms)

                                                
                                                
-- stdout --
	multinode-647253
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-647253-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:09:29.612381 2168576 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:09:29.612501 2168576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:09:29.612510 2168576 out.go:358] Setting ErrFile to fd 2...
	I0120 16:09:29.612514 2168576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:09:29.612730 2168576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:09:29.612920 2168576 out.go:352] Setting JSON to false
	I0120 16:09:29.612956 2168576 mustload.go:65] Loading cluster: multinode-647253
	I0120 16:09:29.613108 2168576 notify.go:220] Checking for updates...
	I0120 16:09:29.613547 2168576 config.go:182] Loaded profile config "multinode-647253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:09:29.613578 2168576 status.go:174] checking status of multinode-647253 ...
	I0120 16:09:29.614091 2168576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:09:29.614149 2168576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:09:29.633022 2168576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0120 16:09:29.633494 2168576 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:09:29.634116 2168576 main.go:141] libmachine: Using API Version  1
	I0120 16:09:29.634142 2168576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:09:29.634501 2168576 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:09:29.634714 2168576 main.go:141] libmachine: (multinode-647253) Calling .GetState
	I0120 16:09:29.636550 2168576 status.go:371] multinode-647253 host status = "Stopped" (err=<nil>)
	I0120 16:09:29.636576 2168576 status.go:384] host is not running, skipping remaining checks
	I0120 16:09:29.636582 2168576 status.go:176] multinode-647253 status: &{Name:multinode-647253 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 16:09:29.636607 2168576 status.go:174] checking status of multinode-647253-m02 ...
	I0120 16:09:29.636911 2168576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 16:09:29.636961 2168576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 16:09:29.652168 2168576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0120 16:09:29.652729 2168576 main.go:141] libmachine: () Calling .GetVersion
	I0120 16:09:29.653249 2168576 main.go:141] libmachine: Using API Version  1
	I0120 16:09:29.653270 2168576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 16:09:29.653639 2168576 main.go:141] libmachine: () Calling .GetMachineName
	I0120 16:09:29.653883 2168576 main.go:141] libmachine: (multinode-647253-m02) Calling .GetState
	I0120 16:09:29.655524 2168576 status.go:371] multinode-647253-m02 host status = "Stopped" (err=<nil>)
	I0120 16:09:29.655542 2168576 status.go:384] host is not running, skipping remaining checks
	I0120 16:09:29.655549 2168576 status.go:176] multinode-647253-m02 status: &{Name:multinode-647253-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.13s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (98.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-647253 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 16:09:45.663486 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-647253 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m37.781376587s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-647253 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (98.33s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-647253
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-647253-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-647253-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (70.898603ms)

                                                
                                                
-- stdout --
	* [multinode-647253-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-647253-m02' is duplicated with machine name 'multinode-647253-m02' in profile 'multinode-647253'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-647253-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-647253-m03 --driver=kvm2  --container-runtime=crio: (44.014623772s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-647253
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-647253: exit status 80 (228.724874ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-647253 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-647253-m03 already exists in multinode-647253-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-647253-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-647253-m03: (1.002214276s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.37s)
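The name-conflict handling above comes down to three cases; a condensed sketch, assuming the multinode-647253 profile still exists:

	out/minikube-linux-amd64 start -p multinode-647253-m02 --driver=kvm2 --container-runtime=crio
	# exits 14 (MK_USAGE): the name collides with the m02 machine inside the multinode-647253 profile
	out/minikube-linux-amd64 start -p multinode-647253-m03 --driver=kvm2 --container-runtime=crio
	# allowed as a standalone profile, but "node add -p multinode-647253" afterwards exits 80 (GUEST_NODE_ADD)
	out/minikube-linux-amd64 delete -p multinode-647253-m03   # clean up the conflicting profile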

                                                
                                    
TestScheduledStopUnix (114.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-151210 --memory=2048 --driver=kvm2  --container-runtime=crio
E0120 16:17:14.250871 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-151210 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.172605759s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151210 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-151210 -n scheduled-stop-151210
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151210 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 16:17:26.432300 2136749 retry.go:31] will retry after 92.818µs: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.433451 2136749 retry.go:31] will retry after 161.476µs: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.434595 2136749 retry.go:31] will retry after 326.227µs: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.435734 2136749 retry.go:31] will retry after 280.521µs: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.436868 2136749 retry.go:31] will retry after 393.593µs: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.438033 2136749 retry.go:31] will retry after 451.281µs: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.439201 2136749 retry.go:31] will retry after 698.975µs: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.440380 2136749 retry.go:31] will retry after 1.723806ms: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.442629 2136749 retry.go:31] will retry after 3.240556ms: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.446856 2136749 retry.go:31] will retry after 2.013868ms: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.449093 2136749 retry.go:31] will retry after 4.479219ms: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.454349 2136749 retry.go:31] will retry after 9.364047ms: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.464595 2136749 retry.go:31] will retry after 18.02864ms: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.482789 2136749 retry.go:31] will retry after 22.703144ms: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
I0120 16:17:26.506078 2136749 retry.go:31] will retry after 33.719792ms: open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/scheduled-stop-151210/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151210 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151210 -n scheduled-stop-151210
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-151210
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151210 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-151210
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-151210: exit status 7 (75.917981ms)

                                                
                                                
-- stdout --
	scheduled-stop-151210
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151210 -n scheduled-stop-151210
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151210 -n scheduled-stop-151210: exit status 7 (77.194765ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-151210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-151210
--- PASS: TestScheduledStopUnix (114.89s)
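A condensed sketch of the scheduled-stop flow exercised above, assuming the scheduled-stop-151210 profile:

	out/minikube-linux-amd64 stop -p scheduled-stop-151210 --schedule 5m        # arm a delayed stop
	out/minikube-linux-amd64 stop -p scheduled-stop-151210 --cancel-scheduled   # disarm it; the host stays Running
	out/minikube-linux-amd64 stop -p scheduled-stop-151210 --schedule 15s       # arm again and let it fire
	out/minikube-linux-amd64 status -p scheduled-stop-151210                    # exits 7 once the host reports Stopped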

                                                
                                    
TestRunningBinaryUpgrade (151.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1019582501 start -p running-upgrade-054640 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1019582501 start -p running-upgrade-054640 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m14.969604858s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-054640 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-054640 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.394394789s)
helpers_test.go:175: Cleaning up "running-upgrade-054640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-054640
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-054640: (1.244366695s)
--- PASS: TestRunningBinaryUpgrade (151.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-383886 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-383886 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (90.222341ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-383886] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
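The usage error above is the expected behaviour: --kubernetes-version cannot be combined with --no-kubernetes. A sketch of the failing call and the fix the error message suggests:

	out/minikube-linux-amd64 start -p NoKubernetes-383886 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# exits 14 (MK_USAGE)
	out/minikube-linux-amd64 config unset kubernetes-version   # only needed if the version was set as a global config value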

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (124.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-383886 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-383886 --driver=kvm2  --container-runtime=crio: (2m4.349246755s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-383886 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (124.60s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (132.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.424267041 start -p stopped-upgrade-285935 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.424267041 start -p stopped-upgrade-285935 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m20.211189664s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.424267041 -p stopped-upgrade-285935 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.424267041 -p stopped-upgrade-285935 stop: (2.125141107s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-285935 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-285935 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.403351513s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (132.74s)
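The upgrade path being verified is: create and stop a cluster with an older minikube release, then start the stopped cluster again with the freshly built binary. Condensed from the commands above (the /tmp binary is the v1.26.0 release downloaded by the test):

	/tmp/minikube-v1.26.0.424267041 start -p stopped-upgrade-285935 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.424267041 -p stopped-upgrade-285935 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-285935 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio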

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-383886 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-383886 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.76582348s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-383886 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-383886 status -o json: exit status 2 (259.875057ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-383886","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-383886
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-383886: (1.03309574s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.06s)
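Re-running start with --no-kubernetes against an existing profile shuts Kubernetes down while leaving the VM up, which is why status exits 2 above. Sketch:

	out/minikube-linux-amd64 start -p NoKubernetes-383886 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p NoKubernetes-383886 status -o json
	# exits 2; the JSON reports "Host":"Running" with "Kubelet":"Stopped" and "APIServer":"Stopped"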

                                                
                                    
TestNoKubernetes/serial/Start (25.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-383886 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-383886 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.236430961s)
--- PASS: TestNoKubernetes/serial/Start (25.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-383886 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-383886 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.593355ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
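The verification above simply asks systemd inside the guest whether kubelet is active; a one-line sketch:

	out/minikube-linux-amd64 ssh -p NoKubernetes-383886 "sudo systemctl is-active --quiet service kubelet"
	# exits non-zero (the underlying systemctl is-active returns 3 for an inactive unit), confirming kubelet is not running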

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.831562186s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.069030963s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.90s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-383886
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-383886: (1.396130873s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (35.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-383886 --driver=kvm2  --container-runtime=crio
E0120 16:22:14.249723 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-383886 --driver=kvm2  --container-runtime=crio: (35.776058962s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.78s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-285935
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-383886 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-383886 "sudo systemctl is-active --quiet service kubelet": exit status 1 (222.414132ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestPause/serial/Start (84.52s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-162976 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-162976 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m24.517826945s)
--- PASS: TestPause/serial/Start (84.52s)

                                                
                                    
TestNetworkPlugins/group/false (4.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-708138 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-708138 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (513.133308ms)

                                                
                                                
-- stdout --
	* [false-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 16:24:57.208940 2178670 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:24:57.209133 2178670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:24:57.209145 2178670 out.go:358] Setting ErrFile to fd 2...
	I0120 16:24:57.209152 2178670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:24:57.209400 2178670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
	I0120 16:24:57.210051 2178670 out.go:352] Setting JSON to false
	I0120 16:24:57.211219 2178670 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":29243,"bootTime":1737361054,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:24:57.211291 2178670 start.go:139] virtualization: kvm guest
	I0120 16:24:57.213298 2178670 out.go:177] * [false-708138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:24:57.214473 2178670 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:24:57.214481 2178670 notify.go:220] Checking for updates...
	I0120 16:24:57.219050 2178670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:24:57.220265 2178670 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
	I0120 16:24:57.221592 2178670 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
	I0120 16:24:57.223262 2178670 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:24:57.224652 2178670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:24:57.226679 2178670 config.go:182] Loaded profile config "force-systemd-flag-860028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:24:57.226805 2178670 config.go:182] Loaded profile config "kubernetes-upgrade-207056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 16:24:57.226919 2178670 config.go:182] Loaded profile config "running-upgrade-054640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 16:24:57.227065 2178670 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:24:57.661292 2178670 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 16:24:57.662547 2178670 start.go:297] selected driver: kvm2
	I0120 16:24:57.662562 2178670 start.go:901] validating driver "kvm2" against <nil>
	I0120 16:24:57.662586 2178670 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:24:57.664841 2178670 out.go:201] 
	W0120 16:24:57.666136 2178670 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0120 16:24:57.667389 2178670 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-708138 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-708138" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:24:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.22:8443
  name: force-systemd-flag-860028
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:24:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.244:8443
  name: running-upgrade-054640
contexts:
- context:
    cluster: force-systemd-flag-860028
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:24:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: force-systemd-flag-860028
  name: force-systemd-flag-860028
- context:
    cluster: running-upgrade-054640
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:24:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-054640
  name: running-upgrade-054640
current-context: force-systemd-flag-860028
kind: Config
preferences: {}
users:
- name: force-systemd-flag-860028
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/force-systemd-flag-860028/client.crt
    client-key: /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/force-systemd-flag-860028/client.key
- name: running-upgrade-054640
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/running-upgrade-054640/client.crt
    client-key: /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/running-upgrade-054640/client.key
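The kubeconfig dumped above contains only the force-systemd-flag-860028 and running-upgrade-054640 entries, which is why every kubectl probe in this debugLogs run reports that the false-708138 context does not exist. A minimal sketch of checking that by hand, assuming the same KUBECONFIG the test run used:

	# list the contexts kubectl actually knows about
	kubectl config get-contexts
	# show the active one (force-systemd-flag-860028 in the dump above)
	kubectl config current-context
	# selecting a context that is not in the file fails, just as the
	# "--context false-708138" probes above do
	kubectl config use-context false-708138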

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-708138

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-708138"

                                                
                                                
----------------------- debugLogs end: false-708138 [took: 3.911790665s] --------------------------------
helpers_test.go:175: Cleaning up "false-708138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-708138
--- PASS: TestNetworkPlugins/group/false (4.60s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (160.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-552545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-552545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (2m40.444820874s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (160.44s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (100.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-429406 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-429406 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m40.201723183s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-024679 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 16:27:14.249867 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-024679 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m1.401478302s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-429406 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b042b746-3822-429b-9e22-f8b863f243ec] Pending
helpers_test.go:344: "busybox" [b042b746-3822-429b-9e22-f8b863f243ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b042b746-3822-429b-9e22-f8b863f243ec] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004110499s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-429406 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)
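The DeployApp step applies testdata/busybox.yaml, waits for a pod matching integration-test=busybox to become Running, then reads the file-descriptor limit inside it. The sketch below is a hypothetical stand-in for that manifest (the real testdata/busybox.yaml may differ); the image name is the one listed later by VerifyKubernetesImages:

	# hypothetical minimal manifest in the spirit of testdata/busybox.yaml
	kubectl --context embed-certs-429406 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF
	# same probe the test runs once the pod is Running
	kubectl --context embed-certs-429406 exec busybox -- /bin/sh -c "ulimit -n"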

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-552545 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fc900a9b-fbb2-4243-a869-d421eb57ab0f] Pending
helpers_test.go:344: "busybox" [fc900a9b-fbb2-4243-a869-d421eb57ab0f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fc900a9b-fbb2-4243-a869-d421eb57ab0f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005214599s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-552545 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-429406 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-429406 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.07285244s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-429406 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-429406 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-429406 --alsologtostderr -v=3: (1m31.080122977s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-552545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-552545 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (91.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-552545 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-552545 --alsologtostderr -v=3: (1m31.086497979s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-024679 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa6112c4-7356-441c-b2cf-59985fa70ea5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa6112c4-7356-441c-b2cf-59985fa70ea5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003886549s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-024679 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-024679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-024679 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-024679 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-024679 --alsologtostderr -v=3: (1m31.345749544s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-429406 -n embed-certs-429406
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-429406 -n embed-certs-429406: exit status 7 (77.196534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-429406 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
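The EnableAddonAfterStop step treats a non-zero exit from minikube status as acceptable, since a stopped host is exactly what it expects at this point. The same check done by hand, using the commands from the log:

	# host state of the stopped profile; prints "Stopped" and exits non-zero (7 here)
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-429406 -n embed-certs-429406
	echo "exit code: $?"
	# enabling an addon is still permitted while the cluster is stopped
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-429406 --images=MetricsScraper=registry.k8s.io/echoserver:1.4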

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-552545 -n no-preload-552545
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-552545 -n no-preload-552545: exit status 7 (76.051832ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-552545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (323.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-552545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-552545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (5m23.618274454s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-552545 -n no-preload-552545
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (323.93s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679: exit status 7 (76.921674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-024679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (320.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-024679 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 16:29:45.664384 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-024679 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (5m20.593506038s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (320.89s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-806597 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-806597 --alsologtostderr -v=3: (4.318444313s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-806597 -n old-k8s-version-806597: exit status 7 (78.080523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-806597 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jqh9g" [7ac08632-7b75-4947-9def-bd29ff8fe741] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003968758s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jqh9g" [7ac08632-7b75-4947-9def-bd29ff8fe741] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005003051s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-552545 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t8mw7" [f7857471-d949-4e4e-9859-1cc673c0b380] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005688876s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-552545 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-552545 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-552545 -n no-preload-552545
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-552545 -n no-preload-552545: exit status 2 (272.563975ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-552545 -n no-preload-552545
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-552545 -n no-preload-552545: exit status 2 (275.570751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-552545 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-552545 -n no-preload-552545
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-552545 -n no-preload-552545
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.88s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (54.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-369874 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-369874 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (54.796866221s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.80s)
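The newest-cni profile is started with --network-plugin=cni plus an --extra-config override. As a rough orientation (an assumption, not something the log states), minikube's kubeadm.pod-network-cidr extra-config is handed through to kubeadm as its pod network CIDR:

	# flags trimmed to the two relevant here; see the full command in the log above
	out/minikube-linux-amd64 start -p newest-cni-369874 --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --container-runtime=crio
	# roughly equivalent kubeadm ClusterConfiguration fragment (assumption):
	#   networking:
	#     podSubnet: 10.42.0.0/16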

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t8mw7" [f7857471-d949-4e4e-9859-1cc673c0b380] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004942233s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-024679 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-024679 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-024679 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679: exit status 2 (273.511796ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679: exit status 2 (278.616965ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-024679 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-024679 -n default-k8s-diff-port-024679
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (79.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m19.751041876s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.75s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-369874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-369874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.131668982s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-369874 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-369874 --alsologtostderr -v=3: (7.372240687s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-369874 -n newest-cni-369874
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-369874 -n newest-cni-369874: exit status 7 (77.680632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-369874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:244: (dbg) Done: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-369874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.334692882s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-369874 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-369874 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (38.505147049s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-369874 -n newest-cni-369874
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-708138 "pgrep -a kubelet"
I0120 16:36:41.070869 2136749 config.go:182] Loaded profile config "auto-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-708138 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-scgzt" [ece3be60-ef67-4703-9bda-7cf5dbb89311] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-scgzt" [ece3be60-ef67-4703-9bda-7cf5dbb89311] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004634768s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-708138 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
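The Localhost and HairPin probes above differ only in the target: the first dials 127.0.0.1 inside the pod, while the second dials the netcat service name, so the pod reaches itself back through its own service and the check depends on hairpin traffic being handled correctly. The two commands side by side, as run by the tests:

	# loopback inside the pod; no service routing involved
	kubectl --context auto-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# pod reaching itself through the "netcat" service; exercises hairpin traffic
	kubectl --context auto-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"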

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-369874 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-369874 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-369874 --alsologtostderr -v=1: (1.845244299s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-369874 -n newest-cni-369874
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-369874 -n newest-cni-369874: exit status 2 (380.254836ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-369874 -n newest-cni-369874
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-369874 -n newest-cni-369874: exit status 2 (298.621912ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-369874 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-369874 --alsologtostderr -v=1: (1.121943638s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-369874 -n newest-cni-369874
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-369874 -n newest-cni-369874
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.59s)
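The Pause step exercises the full pause/verify/unpause/verify cycle; the exit status 2 results above are expected while the profile is paused (the API server reports Paused and the kubelet reports Stopped). The same sequence can be replayed by hand against this profile:

	out/minikube-linux-amd64 pause -p newest-cni-369874
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-369874
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-369874
	out/minikube-linux-amd64 unpause -p newest-cni-369874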

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (65.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m5.849927066s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.85s)
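Each network-plugin group uses the same minikube invocation with only the --cni value changing. Reproducing this cluster outside the harness would look like the following, using the flags recorded above:

	out/minikube-linux-amd64 start -p kindnet-708138 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=crio --wait=true --wait-timeout=15m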

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ts8hm" [71fa90a3-534b-4c0f-a67a-67e4d6f75df9] Running
E0120 16:38:09.303844 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:38:11.310676 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0087603s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
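The ControllerPod step waits up to 10 minutes for the kindnet DaemonSet pod to report Ready. An equivalent manual check, assuming the cluster from the Start step is still running:

	kubectl --context kindnet-708138 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=600s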

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-708138 "pgrep -a kubelet"
I0120 16:38:13.488851 2136749 config.go:182] Loaded profile config "kindnet-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-708138 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-st2dh" [acf3b9cb-9578-48f2-b516-14af163b5935] Pending
E0120 16:38:14.426050 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-st2dh" [acf3b9cb-9578-48f2-b516-14af163b5935] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-st2dh" [acf3b9cb-9578-48f2-b516-14af163b5935] Running
E0120 16:38:24.667642 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004613158s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)
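The NetCatPod step (re)creates the netcat deployment and waits for its pod to reach Running. An equivalent manual sequence, assuming the same testdata manifest:

	kubectl --context kindnet-708138 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kindnet-708138 rollout status deployment/netcat --timeout=15m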

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-708138 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (68.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0120 16:38:45.149621 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:39:12.753959 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:39:26.111453 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/default-k8s-diff-port-024679/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:39:45.663566 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/functional-232451/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m8.48985279s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.49s)
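Unlike the kindnet group, --cni here points at a manifest file rather than a built-in plugin name, so minikube applies the supplied flannel YAML itself. The equivalent invocation, using the flags recorded above:

	out/minikube-linux-amd64 start -p custom-flannel-708138 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio --wait=true --wait-timeout=15m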

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-708138 "pgrep -a kubelet"
I0120 16:39:51.918422 2136749 config.go:182] Loaded profile config "custom-flannel-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-708138 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gms9f" [415c6810-1759-40ef-93d6-74f951d88b7b] Pending
helpers_test.go:344: "netcat-5d86dc444-gms9f" [415c6810-1759-40ef-93d6-74f951d88b7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gms9f" [415c6810-1759-40ef-93d6-74f951d88b7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005252831s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-708138 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (85.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0120 16:40:34.675918 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/no-preload-552545/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m25.417050677s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.42s)
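This group passes --enable-default-cni=true instead of naming a plugin, which selects minikube's default bridge-style CNI (the older spelling of --cni=bridge). The invocation, as recorded above:

	out/minikube-linux-amd64 start -p enable-default-cni-708138 --memory=3072 --enable-default-cni=true --driver=kvm2 --container-runtime=crio --wait=true --wait-timeout=15m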

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-708138 "pgrep -a kubelet"
I0120 16:41:45.794374 2136749 config.go:182] Loaded profile config "enable-default-cni-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-708138 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tsm72" [309aff41-dda1-4ae3-bb88-cfb7845b539c] Pending
E0120 16:41:46.474809 2136749 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/auto-708138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-tsm72" [309aff41-dda1-4ae3-bb88-cfb7845b539c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00533435s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-708138 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (69.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-708138 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m9.514483956s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-708138 "pgrep -a kubelet"
I0120 16:43:41.700606 2136749 config.go:182] Loaded profile config "bridge-708138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-708138 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sv7fz" [1466207a-6b6c-4c85-ad84-a51c500df137] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sv7fz" [1466207a-6b6c-4c85-ad84-a51c500df137] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004860157s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-708138 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-708138 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (39/304)

Order  Skipped test  Duration (seconds)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.0/cached-images 0
15 TestDownloadOnly/v1.32.0/binaries 0
16 TestDownloadOnly/v1.32.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
270 TestStartStop/group/disable-driver-mounts 0.17
277 TestNetworkPlugins/group/kubenet 3.58
285 TestNetworkPlugins/group/cilium 4.03
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-823768 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-260995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-260995
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-708138 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-708138" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:24:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.244:8443
  name: running-upgrade-054640
contexts:
- context:
    cluster: running-upgrade-054640
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:24:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-054640
  name: running-upgrade-054640
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-054640
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/running-upgrade-054640/client.crt
    client-key: /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/running-upgrade-054640/client.key
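Every kubectl error in this debugLogs dump ("context was not found for specified context: kubenet-708138") is consistent with the kubeconfig above: its only entry is running-upgrade-054640 and current-context is empty, because the kubenet profile is skipped before a cluster is ever created. The available contexts can be confirmed with:

	kubectl config get-contexts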

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-708138

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-708138"

                                                
                                                
----------------------- debugLogs end: kubenet-708138 [took: 3.39510883s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-708138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-708138
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)
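Every ">>> host:" probe in the debugLogs above reports the same hint because the "kubenet-708138" profile was never created: the test is skipped before "minikube start" ever runs, so each host command falls through to minikube's profile-not-found message. A minimal, hypothetical Go sketch of such a collector loop is shown below (it is not the actual net_test.go helper; the probe titles and ssh commands are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumed profile name, taken from the log above; the binary path matches
		// the one used throughout this report.
		profile := "kubenet-708138"
		probes := []struct{ title, cmd string }{
			{"/etc/docker/daemon.json", "sudo cat /etc/docker/daemon.json"},
			{"crio daemon status", "sudo systemctl status crio"},
			{"crio config", "sudo crio config"},
		}
		for _, p := range probes {
			fmt.Printf(">>> host: %s:\n", p.title)
			// With no such profile, minikube prints the "Profile ... not found"
			// hint instead of the command output; CombinedOutput captures it.
			out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", p.cmd).CombinedOutput()
			fmt.Println(string(out))
		}
	}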

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-708138 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-708138" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:24:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.244:8443
  name: running-upgrade-054640
contexts:
- context:
    cluster: running-upgrade-054640
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:24:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-054640
  name: running-upgrade-054640
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-054640
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/running-upgrade-054640/client.crt
    client-key: /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/running-upgrade-054640/client.key
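The kubectl config dump above explains why every kubectl-based probe in this debugLogs run fails: current-context is empty and the only defined context is running-upgrade-054640, so any call that names the never-created "cilium-708138" context gets "context was not found" / "does not exist". A minimal sketch of checking this programmatically, assuming the k8s.io/client-go module is available and using a placeholder kubeconfig path:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; the report's kubeconfig lives under the
		// /home/jenkins/minikube-integration/20109-2129584 tree.
		cfg, err := clientcmd.LoadFromFile("/path/to/kubeconfig")
		if err != nil {
			fmt.Println("could not load kubeconfig:", err)
			return
		}
		fmt.Printf("current-context: %q\n", cfg.CurrentContext)
		for name := range cfg.Contexts {
			fmt.Println("available context:", name)
		}
		if _, ok := cfg.Contexts["cilium-708138"]; !ok {
			// Matches the kubectl errors above: the context simply is not defined.
			fmt.Println(`context "cilium-708138" does not exist in this kubeconfig`)
		}
	}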

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-708138

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-708138" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-708138"

                                                
                                                
----------------------- debugLogs end: cilium-708138 [took: 3.86436429s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-708138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-708138
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)

                                                
                                    